
MONITORING & EVALUATION TRAINING TOOLKIT

A How-To Guide for use by the Program Quality & Learning Unit (PQLU)

CARE KENYA - Refugee Assistance Program, Dadaab

Compiled by:
Obando Ekesa, Research, Evaluation and Community Development Consultant; and
Michael Ochieng, AMREC Consultants


Acknowledgements

This M&E Training Toolkit was compiled by Obando Ekesa (Research, Evaluation and Community Development Consultant) and Michael Ochieng (AMREC Consultants). It is a product of the M&E training conducted by these consultants for CARE Kenya's Refugee Assistance Program staff in Dadaab in December 2014. Much appreciation goes to ECHO for continued support and especially for facilitating the M&E training and the development of this manual.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS

INTRODUCTION
    Background Information
    Purpose and Scope of the Toolkit
    Target/User for the Toolkit
    Limitations of this Toolkit

MONITORING & EVALUATION TRAINING IN THE DADAAB CONTEXT

M&E TRAINING SESSIONS
    Training Objectives
    Training Methodology
    Training Content
        Preliminaries
        Topics
            Topic 1: Introduction – Health & its relationship to CARE's mandate
            Topic 2: Concepts and Principles of Planning, Monitoring and Evaluation
            Topic 3: The M&E System
            Topic 4: Standards and Indicators
            Topic 5: Developing M&E Tools and Data Management

REFERENCES

ANNEXES
    Annex I: Pre/Post-Test Questions
    Annex II: Map of Dadaab


INTRODUCTION

Background Information

CARE International in Kenya (CIK) is a development and humanitarian organization with a goal to reduce

poverty at the household level and provide relief in emergencies. CIK has been operational in Kenya since

1968 with its Country Office in Nairobi. CIK implements programs in priority regions of Nyanza Province (with

a sub-office in Kisumu), Kibera in Nairobi, North Eastern Province (sub-offices in Garissa, Dadaab, Takaba and

Marsabit) and Embu in Eastern Province. CIK carries out major initiatives in Health, HIV and AIDS, Livelihoods, Group Savings and Loans, Emergency Assistance, and the Refugee Assistance Program.

The Refugee Assistance Program has been operational in Dadaab since 1991, when the Dadaab Refugee

Camps were set up following the outbreak of civil war in Somalia, which led to the collapse of former

president Siad Barre’s government and displacement of hundreds of thousands of Somalis, many of whom

fled into Kenya. From 1991, CARE has been one of the major implementing partners for UNHCR and WFP and

has provided relief and development assistance for the three main refugee camps in Dadaab in addition to

supporting host communities around the camps.

Dadaab is approximately 80km from the Somalia border and the refugees have in the past been settled in

three camps namely: Ifo, Dagahaley and Hagadera. In the course of 2011, the camps experienced a high

influx of refugees as a result of the protracted war and famine in Somalia, which led to the creation of two

new camps, Kambioos and Ifo 2. As at December 23, 2012, the total population of Dadaab camps was

449,815 but the number had reduced to 339,962 as at October 31, 2014 according to UNHCR statistics.

The majority of the refugees are Somalis, who comprise over 94% of the total refugees in Dadaab. The other refugee populations by nationality are Ethiopians (4.2%), with the rest coming from Sudan, Eritrea, Uganda, Congo, Rwanda, Burundi and Zimbabwe.

The Refugee Assistance Program (RAP) is implemented in four program sectors based on thematic areas of

response to the refugees' needs. These sectors are: Water, Sanitation and Hygiene (WASH); Food Distribution and Logistics (FSL); Education; and Gender and Community Development (GCD). The WASH sector has two subsectors, namely Public Health Engineering (PHE) and Public Health Promotion (PHP). FSL is responsible for food distribution to the refugees and provides logistical support for fuel handling and the distribution of non-food items (NFIs) or common relief items (CRIs) to beneficiaries. GCD has four sub-sectors, namely:

Livelihood; Psychosocial Unit (PSU); Youth, Sports and Development (YSD); and Gender and Development

(GAD).

Purpose and Scope of the Toolkit

Monitoring and evaluation (M&E) is an essential component of any intervention, project, or program and it

is important for various reasons such as helping program implementers make informed decisions and

ensuring effective and efficient use of resources.

As part of ensuring achievement of CI’s programming principles numbers 1, 3 & 6 – promoting

empowerment; ensuring accountability & promoting responsibility; and seeking sustainable results – this

manual was developed following a monitoring and evaluation training conducted December 1 – 4, 2014 by the consultants (Obando Ekesa and Michael Ochieng), as a guide to be used by the Program Quality and Learning Unit (PQLU) in CARE's Refugee Assistance Program for further training of staff (both national and

refugee incentive staff).

The toolkit is therefore designed as a guiding framework that goes a step beyond traditional generic M&E training guides, which often describe only how to conduct a monitoring and evaluation training, by detailing content tailored to the needs of the Dadaab refugee context so as to guide the PQLU in undertaking M&E training. The manual still covers the most common aspects of M&E but relates all of it to the Dadaab context. The emphasis of this manual is on the applicability of the M&E


processes to the CARE Dadaab context, with the aim of ensuring effectiveness and efficiency of project implementation and the ultimate achievement of results. This reflects the paradigm shift from traditional M&E (i.e. activity- and output-based M&E) to results-based M&E (which focuses on outcomes and impacts).

It should be noted, however, that the manual is neither prescriptive nor exhaustive; it is to be used as both a complementary and a supplementary guide to enhance the project implementation process and to add to CARE Kenya Refugee Assistance Program's existing M&E system.

Target/User for the Toolkit

This toolkit is meant to be used by the Program Quality and Learning Unit and/or program managers/coordinators who are interested in training their staff, particularly refugee incentive workers. The manual can also be used for training of trainers (ToT).

Limitations of this Toolkit

This toolkit, first, takes cognizance that the M&E training will involve a diversity of participants with different levels of M&E knowledge. It therefore tries to cater for the needs of new, basic and advanced learners, which is part of CARE Refugee Assistance Program's strategic direction of empowering the refugee staff through integration. Secondly, it does not give a comprehensive description of M&E processes, nor does it describe how to conduct the training; rather, it outlines the content and tries to make the training applicable, hence the focus on CARE's programming principles and examples from the Dadaab refugee context. Lastly, the manual takes cognizance of the fact that M&E is a very wide topic, studied at higher levels of education, and what is contained herein is a basic understanding aimed at laying foundations for effective program delivery. Consequently, the manual has room for further refinement as it is used by the PQLU.

MONITORING & EVALUATION TRAINING IN THE DADAAB CONTEXT

Undertaking monitoring and evaluation (M&E) training may be quite different in the Dadaab context as opposed to other settings, since there is an emphasis on the involvement of refugee incentive staff as part of CIK's long-range strategic focus of making longer-term strategic commitments, specific to context and impact populations, with a clearly articulated theory of change. Consequently, CARE will gradually and deliberately develop its resources and capacity to support, catalyze, facilitate and advocate for social change by working with and through alliances and building the capacity of local organizations to ensure a more efficient allocation and use of national resources for vulnerable and poor communities.

In tandem with the above, CARE Refugee Assistance Program’s Strategy will focus its interventions on

enhancing the capacity of the refugees to deliver services in the four program sectors, namely WASH, FSL,

Education and GCD. Given the recent political developments in the country, where there is interest in the repatriation of the refugees, and the wave of insecurity, there is an even greater need to empower the

refugee community to take a greater role in the implementation of projects. This can be done by allowing

them to take the driver’s seat by:

- Exploring an area, learning about key problems and opportunities.
- Planning research or development interventions.
- Investigating one key problem or specific topic.
- Involving the local people in research and planning.
- Monitoring and evaluating a research or development activity.
- Dealing with differences among different groups (conflict resolution).

The M&E training is therefore both an instrument and a process that can help address the various challenges

in the Dadaab refugee context and one that is especially useful in involving all beneficiaries as key

stakeholders who are part of achieving CI’s programming principles of empowering communities. This is also

part of the larger context of ensuring community ownership. However, it is acknowledged that creating

community ownership is a continuous process that will require both patience and persistence if lasting

change is to be seen within the Dadaab Refugee context.


M&E TRAINING SESSIONS

The training sessions below are based on the M&E training conducted for CARE's Refugee Assistance Program staff on December 1 – 4, 2014. The training's goal/aim was: "To build the capacity of WASH and other selected staff on monitoring and evaluation, [thereby] enhancing effective project delivery."

Training Objectives

The generic specific objectives to be achieved from undertaking a training using this manual are:

1. To improve knowledge of the staff on M&E, its concepts and its importance.

2. To enhance the knowledge and skills of trainees in developing and using a monitoring & evaluation system (e.g. developing indicators, M&E frameworks and plans; data collection, management and analysis, etc.) to enhance project effectiveness and efficiency.

3. To enhance trainees' ability to use data for project/program decision-making, information sharing,

and documentation of best practices and lessons learnt.

Training Methodology

It is important to note that the participants will have varied M&E knowledge; consequently, the trainers need to ensure the training commences with a basic understanding of monitoring & evaluation and its concepts and progresses to more complex concepts/topics. In pedagogy (teaching), this is a teaching tenet known as "starting from the known to the unknown."

For an effective participatory training, it is imperative to have a mix of learning/teaching methods to ensure

the overall aim (i.e. effective implementation and ultimately sustainability of the projects) is achieved, can

be monitored and outcomes documented. It is therefore recommended for the trainers to use a mix of

engaged pedagogical methods – from learner-centered teaching and active learning (involving in-class exercises) to discussion/group-work strategies, experiential learning and, where possible, simulations.

Learner-centered teaching is a paradigm-shift concept whose teaching/learning methods shift the role of the instructors from givers of information to facilitators of participants' learning. This method is cognizant that the trainees are not merely "empty slates" waiting to be filled with knowledge but are learners with rich experience, which only needs to be harnessed towards a better understanding of their respective roles in

monitoring and evaluation of program activities. Learner-centered teaching ensures the participants are not

passive but active learners who will use their skills, interests and abilities during the training.

In-class exercises and discussion/group-work strategies are techniques which allow the trainees to engage in

critical thinking so that they can actively and skillfully conceptualize, apply, analyze, synthesize, and evaluate

information presented to them during and after the training to reach logical programmatic conclusions.

Similarly, experiential learning techniques appreciate the various experiences the trainees possess, and it is incumbent upon the trainers to guide these towards engendering active learning. This is particularly

important for this type of training because experiential learning engages the participants in critical

thinking, problem solving and decision making in contexts that are personally relevant to them. This

approach to learning also involves making opportunities for debriefing and consolidation of ideas and skills

through feedback, reflection, and the application of the ideas and skills to new situations.

For practical purposes, it is advisable to plan the training by incorporating case studies from projects with which the trainees are familiar, and to use the same throughout the duration of the training. These can range from preparing an assignment for group work to looking for aspects to critique to


ensure a better program design in future. This helps the participants to have a holistic understanding of M&E

throughout the training and throughout the project life cycle.

The delivery of the teaching/learning methods can be done through the lecture method, plenary discussions, group-work presentations and simulations, where possible.

Training Content

Because of CARE's shift towards integration, which creates a variety of staff – both national and refugee incentive staff – within the Dadaab context, it is right to assume, in most scenarios, that the participants will have varied backgrounds in monitoring and evaluation knowledge. The trainers should strive to have an iterative process of training – commencing with basic M&E concepts as more complex content, such as data analysis, if applicable, is introduced – as this ensures content not understood in a session can be repeated. What is important here is to engender learning and not merely to rush through to complete the planned content.

In line with the above generic specific objectives, the content to be covered will help meet the following

standard outcomes. Consequently, at the end of the training, the participants should be able to:

• Understand the basic purpose, fundamental principles and scope of monitoring and evaluation.

• Correlate the importance of M&E to the achievement of program/project results

• Apply the M&E principles to their work as project staff to facilitate increased efficiency and

effectiveness in the ECHO-funded WASH project and subsequent future projects.

• Participate in the project's essential M&E functions, such as developing and/or critiquing indicators, collecting data and analyzing it effectively to inform further programming.

• Develop tools for monitoring and evaluating the WASH program.

• Understand the application of M&E in the entire project lifecycle to fully integrate M&E in the WASH

program.

Preliminaries

Prior to the commencement of any training, there are usually preliminary issues to be tackled. These include,

but are not limited to:

• Welcome and brief overview of training – this is to be done by the organizers of the training and, where possible, the most senior person in the organization at the time of the training.

• Introduction – to enable the facilitators to get the most out of the training, it is advisable that the participants not only mention their names but add more information such as their sectors/departments and, most importantly, their experience, if any, in monitoring and evaluation. The facilitators are free to solicit more information here, such as the likes and dislikes of each individual participant, but this is done at their own discretion.

• Expectations – it is important to know the participants' expectations, and it is advisable to pass around sticky notes or pieces of paper on which the participants write at least one expectation. This is preferable to asking the participants to state their expectations aloud, since it allows them to state their expectations honestly without fear of being laughed at in case one's expectations appear "weird" to the others.

• Setting training rules – this is to be done through discussion to allow consensus among the participants on what each person should abide by for the duration of the training.


• Pre-test evaluation – this is often done to gauge the participants' level of knowledge. It is imperative to grade the participants and subsequently tailor the training based on the scores. The questions are developed to ensure they cover the content of the training, and analyzing the performance on each question helps to establish which areas to emphasize during the training. A sample pre-test/post-test

questionnaire is annexed in this manual.
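For trainers who tally the pre-test electronically, the short sketch below illustrates the per-question analysis described above. It is a minimal illustration only: the question labels and scores are invented and do not come from the annexed questionnaire.

# Hypothetical sketch: tabulating pre-test scores per question to find
# topic areas to emphasize. Labels and scores are invented for illustration.

pretest_results = [
    {"id": "P01", "q1_definitions": 1, "q2_indicators": 0, "q3_data_analysis": 0},
    {"id": "P02", "q1_definitions": 1, "q2_indicators": 1, "q3_data_analysis": 0},
    {"id": "P03", "q1_definitions": 0, "q2_indicators": 0, "q3_data_analysis": 1},
]

for question in ("q1_definitions", "q2_indicators", "q3_data_analysis"):
    correct = sum(p[question] for p in pretest_results)
    pct = 100 * correct / len(pretest_results)
    flag = "  <- emphasize during the training" if pct < 50 else ""
    print(f"{question}: {pct:.0f}% correct{flag}")

Questions answered correctly by fewer than half the participants are flagged as areas to emphasize; the 50% threshold is an arbitrary choice the trainer can adjust.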

Topics

Based on the objectives and outcomes of the training, the content can be categorized into five (5) main

topics:

Topic 1: Introduction – Health & its relationship to CARE's mandate.

The aim of this topic is to allow the trainers to begin with a "big picture" perspective and lay the foundations

upon which programs/projects are conceptualized. The following modules should be covered:

• Overview of CARE International (CI) and CARE International in Kenya (CIK) and areas of mandate.

The areas to be covered include: CI Vision and Mission and CI programmatic areas. This is best done

through question and answer to determine what the participants know about the organization’s vision

and mission. The trainers should focus on why it is important for participants to know these ideals and

how applicable it is to their work.

CI Vision - We seek a world of hope, tolerance and social justice, where poverty has been overcome & people live in dignity and security. It is important to help participants focus on the key words rather than memorize the entire statement, as this helps them internalize it.

CI Mission - To serve individuals and families in the poorest communities in the world. CI seeks to

facilitate lasting change by:

- Strengthening capacity for self-help;
- Providing economic opportunity;
- Delivering relief in emergencies;
- Influencing policy decisions at all levels;
- Addressing discrimination in all its forms.

CIK's Vision - We will work with Kenyans to influence and implement policies and programmes that

reduce poverty.

CIK’s Mission - CIK’s purpose is to reduce poverty at the household level and to provide emergency relief.

This is done through:

- Addressing the underlying causes of poverty
- Building capacity for self-reliance
- Working in partnership with stakeholders – at community & national levels
- Programming based on sound analysis, innovation, research & learning
- Addressing all forms of injustice at all levels

After presenting these, it is important to ask the participants:

- Why is it important to know these visions and missions?
- How can we individually and collectively actualize these ideals?


• Overview of Health and its relation to CARE's Mandate.

Discuss with participants the definitions of health and how CARE's overall mandate leads to the achievement of health. It is important to widen the scope of health; for instance, the 1986 Ottawa Charter for Health Promotion identified the prerequisites for health as:

- Peace
- Education
- Income
- Sustainable Resources
- Shelter
- Food
- A stable ecosystem
- Social Justice and Equity

• Introduction to the project – this varies depending on which groups of participants are being trained, but it is done generally to help the participants get a "big picture" perspective of the objectives and intended results of the project. It is important to focus on how the participants' individual roles are connected to the achievement of the result(s) of the project. For instance, in the ECHO-funded project, the diagrammatic representation below was used.

Figure 1: Simplified results framework for the ECHO-funded project

ECHO-funded WASH Project
Title: Maintenance and Improvement of Water Supply, Sanitation & Hygiene for Refugees in Dadaab Camps, Kenya
Objective: To improve water, sanitation and hygiene standards amongst refugees in targeted camps.
Targeted camps: 1. Ifo; 2. Dagahaley

Results:
- Result 1: Minimum acceptable standards for water supply are maintained in Dadaab refugee camps. (What are these minimum standards for water supply?)
- Result 2: Excreta disposal is improved (environment free from faeces; appropriate and adequate toilet facilities). (What does this entail?)
- Result 3: Target group have the capacity to apply knowledge & skills to improve the management and delivery of WASH services. (What does this entail?)
- Result 4: Basic hygiene is improved. (What does this entail?)
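One practical way to carry Figure 1 through the training is to hold the results framework as a simple data structure that participants can later attach indicators to. The sketch below does this in Python; the structure itself and the empty indicator lists are illustrative assumptions, not part of the project document.

# Sketch: the Figure 1 results framework as a plain data structure, so each
# result can later be tied to indicators. The shape of this structure is an
# illustrative assumption; only the statements come from Figure 1.

results_framework = {
    "objective": ("To improve water, sanitation and hygiene standards "
                  "amongst refugees in targeted camps"),
    "results": {
        "R1": {"statement": "Minimum acceptable standards for water supply "
                            "are maintained in Dadaab refugee camps",
               "indicators": []},  # to be filled in during the training
        "R2": {"statement": "Excreta disposal is improved",
               "indicators": []},
        "R3": {"statement": "Target group have the capacity to apply knowledge "
                            "& skills to improve the management and delivery "
                            "of WASH services",
               "indicators": []},
        "R4": {"statement": "Basic hygiene is improved",
               "indicators": []},
    },
}

for key, result in results_framework["results"].items():
    print(f"{key}: {result['statement']}")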


• The role of the CIK RAP staff in the achievement of these (CI, CIK, health, project) objectives and results – to cap the session, it is important to discuss with the participants how they all individually and collectively contribute to the realization of the above. To reinforce this, it is important to draw the participants' attention to the CI Programme Standards Framework, since it relates CI's vision and mission to selected principles, standards, and guidelines that CI members agree should inform and shape all CARE programmes & projects.

The aim of the Programme Standards Framework is: CARE programmes & projects should propose strategies that lead to lasting impact on the lives of poor people & communities. They should do so in a way that conforms to the purpose CI describes for itself in its vision & mission. (The standards framework diagram is not reproduced here.)

As part of the training, it is good to highlight the CI Programming Principles, and where possible,

expound upon them further with the participants’ input. These are:

1. Promote Empowerment

2. Work with partners

3. Ensure accountability and promote responsibility

4. Address discrimination

5. Promote the non-violent resolution of conflicts

6. Seek sustainable results

The Project Standards apply to all CARE programming (including emergencies, rehabilitation and

development) and all forms of interventions. The trainers should emphasize that the standards, as well

as accompanying guidelines, should be used to guide the work of project designers; as a checklist for

approval of project proposals; as a tool for periodic project self-appraisal; and as a part of project

evaluation. The emphasis should not be only on enforcement but also on the strengthening of capacity

to be able to meet these standards for programme quality.


Each CARE project should:

1. Be consistent with the CARE International Programming Principles.

2. Be clearly linked to a Country Office strategy and/or long term programme goals.

3. Ensure the active participation and influence of stakeholders in its analysis, design,

implementation, monitoring and evaluation processes.

4. Have a design that is based on a holistic analysis of the needs and rights of the target population

and the underlying causes of their conditions of poverty and social injustice. It should also

examine the opportunities and risks inherent in the potential interventions.

5. Use a logical framework that explains how the project will contribute to an ultimate impact upon

the lives of members of a defined target population.

6. Set a significant, yet achievable and measurable final goal.

7. Be technically, environmentally, and socially appropriate. Interventions should be based upon

best current practice and on an understanding of the social context and the needs, rights and

responsibilities of the stakeholders.

8. Indicate the appropriateness of project costs, in light of the selected project strategies and

expected outputs and outcomes.

9. Develop and implement a monitoring and evaluation plan and system based on the logical

framework that ensures the collection of baseline, monitoring, and final evaluation data, and

anticipates how the information will be used for decision making; with a budget that includes

adequate amounts for implementing the monitoring and evaluation plan.

10. Establish a baseline for measuring change in indicators of impact and effect, by conducting a

study or survey prior to implementation of project activities.

11. Use indicators that are relevant, measurable, verifiable and reliable.

12. Employ a balance of evaluation methodologies, assure an appropriate level of rigor, and adhere

to recognized ethical standards.

13. Be informed by and contribute to ongoing learning within and outside CARE.

It is important to reiterate these Project Standards because they are directly related to monitoring and evaluation; as such, they are pivotal in the M&E training, since they form the basis of implementation of CARE's activities.


Note: It is important that the introduction is well covered and given more time, as it lays the foundation for

the entire training. It will also help the participants to synthesize their individual responsibilities in their job

description and triangulate the training by incorporating the bigger picture of CARE’s mandate in health & its

relation to WASH, and correlate it to the trainees’ roles in the achievement of better or improved health in

the refugee context and their subsequent roles in monitoring and evaluation as conceptualized below.

The conceptual flow is: CARE's mandate in health outcomes → role of project staff in achieving health & project outcomes for refugees → monitoring & evaluation role of project staff in achieving health & project outcomes in the RAP program.

Source: Obando Ekesa


Topic 2: Concepts and Principles of Planning, Monitoring and Evaluation

The aim of this topic is to introduce the concepts of M&E and their importance to the trainees. The following

modules should be covered:

• Introduction to Monitoring and Evaluation – this covers the definitions and importance of M&E and the complementary roles of monitoring and evaluation. Since, in most cases, the trainees usually have a basic understanding of the above concepts, it is advisable to make this a group-work session.

The group work activity can be done as follows:

In groups of 5 people each, answer the following questions. Choose a secretary who will make the presentation.

o What is the purpose of carrying out M&E in your project/sector?

o Who needs and uses M&E information in your project/sector?

o Who carries out M&E in your project/sector?

o How is M&E carried out in your project/sector?

o When should M&E be carried out?

After the group presentations, the trainers can lay the foundations of M&E. Again, to reiterate Topic 1 above, it is good to connect M&E to actual figures from a CARE perspective. For instance, one can use CARE's 2020 Strategy, which is laden with statistics, and use these to lead into discussions of M&E. CARE's 2020 Strategy is a CI-wide vision documenting how CARE can become more relevant & efficient to achieve a greater impact on the lives of poor, vulnerable women and men. The purpose of the 2020 Strategy is to focus CARE programs and to clarify – both internally & externally – how CARE will contribute to eliminating poverty and social injustice.

CARE commits to achieve the following outcomes by 2020:


Source: CARE International 2020 Strategy Document

To ensure critical thinking and not merely "spoon-feed" participants, it is important to ask participants

the following questions:

- How did CI arrive at these figures?

- How will CI ensure they achieve these outcomes?

Engaging participants in discussion on the above leads the trainers to lay the foundation of M&E (i.e.

definitions, importance of M&E, who uses M&E etc.) in the training and responses by participants can be

referred to throughout the training.

Definitions

Monitoring - involves collection of routine data that measure progress towards achieving program/project

objectives. It is used to track changes in program/project performance over time. The purpose of monitoring

is to permit stakeholders to make informed decisions regarding the effectiveness of programs/projects and

efficient use of resources. Monitoring is sometimes called process evaluation.

Monitoring is therefore an ongoing, continuous process, which requires the collection of data at multiple

points throughout the program/project cycle, including at baseline. It can be used to determine if activities

need adjustment during the intervention to improve desired outcomes.
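As a minimal illustration of monitoring as routine tracking against targets, the sketch below compares invented monthly water-supply figures with a hypothetical target. The indicator, the target and all figures are assumptions for illustration, not project data.

# Sketch: routine monitoring as comparison of actuals against a target over
# time. Indicator, target and monthly figures are invented for illustration.

target = 20  # hypothetical target: litres of water per person per day (l/p/d)

monthly_actuals = {"Jan": 16, "Feb": 18, "Mar": 21, "Apr": 19}

for month, actual in monthly_actuals.items():
    gap = actual - target
    status = "on target" if gap >= 0 else f"below target by {-gap} l/p/d"
    print(f"{month}: {actual} l/p/d ({status})")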

Evaluation - measures how well program/project activities have met expected objectives and/or the extent to which observed changes can be attributed to the program/project/intervention.

Evaluations therefore require: data collection at the start of the program (baseline data) and at the end; a

control or comparison group; and a well-planned study design.
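To show why a baseline and a comparison group matter, here is a worked difference-in-differences sketch with invented figures: the change observed in the comparison group is subtracted from the change in the intervention group, so that only the change attributable to the intervention remains.

# Sketch: difference-in-differences with invented figures, illustrating why
# evaluation needs baseline data and a comparison group. Values are
# hypothetical percentages of households practising handwashing.

intervention = {"baseline": 30.0, "endline": 55.0}
comparison   = {"baseline": 32.0, "endline": 40.0}

change_intervention = intervention["endline"] - intervention["baseline"]  # 25.0
change_comparison   = comparison["endline"] - comparison["baseline"]      # 8.0

attributable_change = change_intervention - change_comparison             # 17.0
print(f"Estimated change attributable to the intervention: "
      f"{attributable_change:.1f} percentage points")

Without the baseline there would be no change to compute, and without the comparison group the full 25-point change would wrongly be credited to the intervention.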


Types of evaluations:

- Process Evaluation
- Outcome Evaluation
- Impact Evaluation

What does evaluation address?

- "Why" – what caused the changes we are monitoring?
- "How" – what was the sequence or process that led to successful (or unsuccessful) outcomes?
- "Compliance/accountability" – did the promised activities actually take place, and as planned?
- "Process/implementation" – was the implementation process followed as anticipated, and with what consequences?

Evaluation Criteria (adopted from the IFRC Framework for Evaluation, www.ifrc.org) - the criteria inform what to evaluate (i.e. the focus of inquiry). They are complementary, and together they seek to provide a comprehensive evaluation. The criteria are based on internationally recognized practices, and include:

- Relevance & appropriateness - Relevance and appropriateness are complementary criteria used to

evaluate an intervention’s objectives and wider goal. Relevance focuses on the extent to which an

intervention is suited to the priorities of the target group, (i.e. local population and donor). It also

considers other approaches that may have been better suited to address the identified needs. The

validity of design is an important element of relevance. This refers to the logic and coherence of the

design of the intervention, (i.e. project or programme), and that its planned (or modified) objectives

remain valid and appropriate to the overall goal/s. Appropriateness focuses on the extent to which

an intervention is tailored to local needs and context, and complements other interventions from

other actors. It includes how well the intervention takes into account the economic, social, political

and environmental context, thus contributing to ownership, accountability, and cost-effectiveness.

When applicable, it is particularly important that the evaluation function supports a community’s

own problem-solving and effective decision-making to address local needs, and build community

capacity to do so in the future.

- Efficiency - Efficiency measures the extent to which results have been delivered in the least costly

manner possible. It is directly related to cost-effectiveness – how well inputs, (i.e. funds, people,

material, and time), are used to undertake activities and are converted to results. It is typically based

upon an intervention’s stated objectives and the processes by which they were pursued, analyzing

the outputs in relation to the inputs and their respective indicators. It includes whether the results

or benefits justify the cost, and can compare alternative approaches to achieving the same results to

determine whether the most efficient processes have been adopted. It is closely related to

effectiveness and the measurement of performance.

- Effectiveness - Effectiveness measures the extent to which an intervention has or is likely to achieve

its intended, immediate results. It is based upon an intervention’s objectives and related indicators,

typically stated in a logical framework. However, the assessment of effectiveness should not be

limited to whether an intervention has achieved its objectives, but also to identify the major reasons

and key lessons to inform further implementation or future interventions. When relevant, this

should include a comparison with alternative approaches to achieving the same results. Key

elements of effectiveness include:

o Timeliness. Evaluations should assess to what extent services and items were delivered in a

timely manner, and to what degree service provision was adequately supported to achieve

objectives on schedule.

o Coordination. This refers to how well various parts of an intervention, often involving

multiple actors, were managed in a cohesive and effective manner. This is particularly

relevant, where disaster response or longer-term development initiatives often involve



multiple National Societies, local and national governments and institutions, and other

partners.

o Trade-offs. Evaluations should assess the effect of decisions made during the intervention

that may alter the goals or priorities in acknowledged or unacknowledged ways.

o Stakeholder perspectives. The viewpoint of stakeholders can help identify factors related to

the performance of an intervention, such as who participated and why, and the influence of

the local context.

- Coverage - Coverage refers to the extent to which population groups are included in or excluded from an

intervention, and the differential impact on these groups. Evaluation of coverage involves

determining who was supported by humanitarian action, and why. It is a particularly important

criterion for emergency response, where there is an imperative to reach major population groups

facing life-threatening risk wherever they are. Coverage is linked closely to effectiveness (discussed

above). Key elements of coverage include:

o Proportionality. Evaluations should examine whether aid has been provided proportionate to need; this includes key questions of equity and the degree of inclusion and exclusion bias. Inclusion bias is the extent to which groups that should not receive support do, and exclusion bias is the extent to which groups that should receive support do not. (A small worked sketch of these error rates follows this list of criteria.)

o Demographic analysis. The assessment of coverage typically requires a breakdown of demographic data (disaggregation) by geographic location and relevant socioeconomic categories, such as gender, age, race, religion, ability, socioeconomic status, and marginalized populations (e.g. internally displaced persons – IDPs).

o Levels of coverage. Coverage can usually be assessed on three levels: 1) International, to

determine whether and why support provided in one intervention, or response, is adequate

in comparison to another; 2) National or regional, to determine whether and why support

was provided according to need in different areas; and 3) Local or community, to determine

who received support and why.

o Cultural/political factors. Coverage is often culturally determined. What constitutes "need,"

and therefore who is assisted, often requires an analysis of socio-political and economic

factors and related power structures.

- Impact - Impact examines the positive and negative changes from an intervention, direct or indirect, intended or unintended. It attempts to measure how much difference we make. Whereas

effectiveness focuses on whether immediate results have been achieved according to the

intervention design, the assessment of impact expands the focus to the longer-term and wider-

reaching consequences of achieving or not achieving intended objectives. Its scope includes the

wider effects of an intervention, including the social, economic, technical, and environmental effect

on individuals, groups, communities, and institutions. Key elements of impact include:

o Attribution. A critical aspect in assessing impact is the degree to which observed changes are due to the evaluated intervention versus some other factor. In other words, how much of the measured change can be credited (or blamed) to the intervention? Two broad

approaches are used to determine attribution. Comparative approaches attempt to establish

what would have happened without a particular intervention, and theory-based methods

examine a particular case in depth to explain how an intervention could be responsible for

specific changes. Both these approaches may involve the use of qualitative and quantitative

methods and tools, and are often used in combination. What is most important is that the

approach and method fits the specific circumstances of an impact assessment – its purpose,

the nature of the intervention being assessed, questions, indicators, level of existing

knowledge, and resources available.

o Methodological constraints. The measurement of impact has considerable methodological constraints and is widely debated. Of the evaluation criteria, it is typically the most difficult and costly to measure, due to the level of sophistication needed. As it focuses on longer-

term changes, it may take months or years for such changes to become apparent. Thus, a


comprehensive assessment of impact is not always possible or practical for an evaluation.

This is especially true for evaluations carried out during or immediately after an

intervention. The reliable and credible assessment of impact may require a longitudinal

approach and a level of resources and specialized skills that is not feasible.

- Coherence - Coherence refers to policy coherence, ensuring that relevant policies (i.e. humanitarian,

security, trade, military, and development) are consistent, and take adequate account of

humanitarian and human-rights considerations. While it is closely related to coordination, coherence

focuses on the extent to which policies of different concerned actors in the intervention context

were complementary or contradictory, whereas coordination focuses more on operational issues.

Key considerations in the assessment of coherence include:

o Multiple actors. Evaluating coherence is of particular importance when there are multiple

actors involved in an intervention with conflicting mandates and interests, such as military

and civilian actors in a conflict setting, or multiple agencies during an emergency response to

a disaster.

o Political repercussions. The assessment and reporting of coherence can have political

consequences, given its focus on wider policy issues. Therefore, careful consideration should

be given to objective credibility in measurement, and to the manner in which findings are

reported.

o Methodologically challenging. Similar to impact, coherence is measured in relation to higher-level, longer-term objectives, and can be difficult for the evaluator(s) to assess, depending on their capacity and resources to conduct policy analysis.

- Sustainability and connectedness - Sustainability is concerned with whether the benefits of an intervention are likely to continue once donor input has been withdrawn. It includes environmental,

institutional, and financial sustainability. It is especially appropriate for longer-term interventions

that seek to build local capacity and ownership so management can continue without donor funding,

e.g. livelihoods programmes. However, with interventions that respond to complex emergencies or

natural disasters, acute and immediate needs take precedence over longer-term objectives. Thus,

connectedness has been adapted from sustainability for these situations. Connectedness refers to

the need to ensure that activities of a short-term emergency are implemented in a way that takes

longer-term and interconnected factors into account. It focuses on intermediate objectives that

assist longer-term objectives, such as the establishment of key linkages between relief and recovery (i.e. a sound exit strategy handing over responsibilities to appropriate stakeholders, allocating adequate resources for post-response, etc.).

Note: The evaluation criteria described here are quite detailed, and it is not necessary to expound on all of them. Where the participants have a basic M&E understanding, it is important to highlight only the key aspects of each criterion.
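As a small worked illustration of the coverage criterion (proportionality in particular), the sketch below computes inclusion and exclusion error rates from invented household lists; all identifiers and figures are hypothetical.

# Sketch: inclusion/exclusion bias from invented household lists.
# "eligible" = households assessed as needing support;
# "served"   = households that actually received support.

eligible = {"HH01", "HH02", "HH03", "HH04", "HH05"}
served   = {"HH02", "HH03", "HH06"}  # HH06 was served but was not eligible

exclusion_error = len(eligible - served) / len(eligible)  # eligible, not served
inclusion_error = len(served - eligible) / len(served)    # served, not eligible

print(f"Exclusion error (eligible but not served): {exclusion_error:.0%}")  # 60%
print(f"Inclusion error (served but not eligible): {inclusion_error:.0%}")  # 33%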


Complementary roles of monitoring and evaluation – in contrasting these two definitions (i.e. monitoring and evaluation), it is evident that they are distinct yet complementary. Monitoring gives information on where a program/project is at any given time (and over time) relative to its targets and outcomes – it is descriptive in intent. Evaluation gives evidence of why targets and outcomes are or are not being achieved – it seeks to address issues of causality (i.e. cause-effect relationships). Evaluation is a complement to monitoring: when a monitoring system sends signals that efforts are going off track, good evaluative information can help clarify the realities and trends noted with the monitoring system.

Complementary roles of monitoring and evaluation

Monitoring                                         | Evaluation
---------------------------------------------------|---------------------------------------------------
Clarifies program objectives                       | Analyzes why intended results were or were not achieved
Links activities & their resources to objectives   | Assesses specific causal contributions of activities to results
Translates objectives into performance indicators  | Examines the implementation process
and sets targets                                   |
Routinely collects data on indicators, compares    | Explores unintended results
actual results with targets                        |
Reports progress to managers and alerts them       | Provides lessons, highlights significant accomplishments or
to problems                                        | program/project potential, and offers recommendations for improvement

Source: Ten Steps to a Results-Based Monitoring and Evaluation System, by J.Z. Kusek and R.C. Rist

Necessities for successful M&E – for successful M&E:

- M&E must have strong ownership & support from leaders
- M&E requires expert support
- M&E needs broad stakeholder consultation in defining and setting target indicators
- M&E training is essential for success
- M&E systems have to be user friendly

M&E: The Real Value

• M&E's true benefit comes when information is used at all levels and all stages, resulting in progress.
• M&E is used for measuring quantity and quality. It looks at inputs, processes and outputs, and the outcomes.
• The impact of the programme can then be assessed.
• Information collected through the M&E process is used to identify and understand the 'why', 'where' and 'what' of performance needs and gaps, and to assess progress and shortfalls.
• M&E can be carried out at project, program or sector level, and also at national, sub-national or local level.

Institutionalizing M&E

- M&E should not only monitor progress for donor-funded projects; it should be part of a regular process of trying to improve overall performance.
- There is a need to ensure that the M&E system in place is sustainable, and that the information collected is relevant, timely, and used in all aspects and at all levels of operation: policy formulation/revision, planning, better transparency and accountability, and management, including resource allocation.


- The effectiveness of institutionalizing M&E depends on how organizations internalize it – the internal capacity to collect and use information.
- The use of information depends on the analysis and evaluation of information to identify key issues relating to progress and performance (evaluation), and on determining further information needs (a continuous process).
- Current M&E practice tends to centre on a techno-centric approach and on information collection, with little effort on identifying the emerging trends, the gaps, and the messages the information conveys.

• Results-Based Monitoring and Evaluation

Results-based M&E is a paradigm shift. It differs from implementation-focused M&E in that it moves beyond

an emphasis on inputs and outputs to a greater focus on outcomes and impacts. It involves the regular

collection of information on how effectively an organization is performing. Results-based monitoring

demonstrates whether a project, program, or policy is achieving its stated goals.

Results-based monitoring requires attention to the causal logic or the Theory of Change. It seeks to answer the following questions:

- What is the "logic" of the overall project, program or policy design?
- How does each component of the program help to establish an if-then relationship?
- Is there a theory behind the change expected or seen? In other words, does the change follow the logic proposed?
- Does this theory or logic hold during implementation?
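To make the if-then logic concrete, the sketch below walks an invented WASH-style results chain from activity to impact, pairing each step with the assumption that must hold for the next level to follow. The chain and its assumptions are illustrative only, not drawn from the project document.

# Sketch: an if-then results chain (theory of change) with invented steps.
# Each link reads: IF the lower level is achieved AND its assumption holds,
# THEN the next level follows.

results_chain = [
    ("Activity", "hygiene promoters conduct household visits",
     "households are willing to participate"),
    ("Output", "households are reached with handwashing messages",
     "messages are understood and accepted"),
    ("Outcome", "households practise handwashing at critical times",
     "soap and water remain available"),
    ("Impact", "incidence of diarrhoeal disease is reduced",
     "no major external shocks occur"),
]

for i in range(len(results_chain) - 1):
    lower_level, lower_result, assumption = results_chain[i]
    upper_level, upper_result, _ = results_chain[i + 1]
    print(f"IF {lower_result} ({lower_level}) AND {assumption},")
    print(f"   THEN {upper_result} ({upper_level}).")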

The power of measuring results:

- If you do not measure results, you cannot tell success from failure.
- If you cannot see success, you cannot reward it.
- If you cannot reward success, you are probably rewarding failure.
- If you cannot see success, you cannot learn from it.
- If you cannot recognize failure, you cannot correct it.
- If you can demonstrate results, you can win public support.


Topic 3: The M&E System

The aim of this topic is to enhance the knowledge and skills of trainees in developing and using a monitoring & evaluation system. The following modules will be covered:

• The project cycle management – integrating M&E in project cycle management (PCM)

The aim of introducing project cycle management is to allow participants to understand the importance of

M&E in relation to project cycle management (PCM) and how project design influences M&E. Contents of

the training include:

• Definitions of project and project cycle management

- The Project Management Institute defines a project as "a temporary endeavour undertaken to create a unique product or service. Temporary means that every project has a definite end. Unique means that the product or service is different in some distinguishing way from all similar products or services."

- Projects differ in size, scope, cost and time, but all have the following characteristics:

o A start and a finish
o A life cycle involving a series of phases between the beginning and the end
o A budget
o A set of activities which are sequential, unique and non-repetitive
o Use of resources which may require coordinating
o Centralized responsibilities for management and implementation
o Defined roles and relationships for participants in the project

- The way in which projects are planned and carried out follows a sequence beginning with an agreed

strategy, which leads to an idea for a specific action, oriented towards achieving a set of objectives,

which then is formulated, implemented, and evaluated with a view to improving the strategy and

further action.

- Project Cycle Management is an approach to managing projects. It determines particular phases of

the Project, and outlines specific actions and approaches to be taken within these phases. The PCM

approach provides for planning and review processes throughout a cycle, and allows for multiple

project cycles to be supported.

- The project cycle also provides a structure to ensure that stakeholders are consulted and relevant

information is available throughout the life of the project, so that informed decisions can be made at

key stages in the life of a project.

- Key elements of PCM:

o Key decisions, information requirements and responsibilities are defined at each phase.

o The phases in the cycle are progressive – each phase needs to be completed for the next to

be tackled with success.

o New programming draws on evaluation to build experience as part of the institutional

learning process.

- What are the stages in the project cycle?

o Project identification

o Project planning

o Project Design

o Implementation

o Monitoring

o Evaluation

Note: M&E should be an integral part of project design as well as project implementation and completion


- Why project cycle management?

o Results-oriented – not activity driven

o Consistency

o Logically sets objectives and actions

o Participatory stakeholder involvement

o Transparency

o Shows whether objectives have been achieved: Indicators (for M&E)

o Framework for assessing relevance, feasibility and sustainability

o Describes external factors that influence the project’s success: assumptions and risks

- M&E in the project life cycle - Project planning sets the crucial foundation for project M&E, and this can significantly affect the success or failure of an M&E process. Unintentionally, M&E is often set up to fail during the initial project design.

o During project implementation, the effectiveness of M&E will be greatly influenced by the attitude and commitment of local people and partners involved in the project and how they relate and communicate with each other.

o When a project lacks logic in its strategy or has unrealistic objectives, good M&E becomes almost impossible. This is because the evaluation questions and indicators often become quite meaningless and will not produce useful information. Furthermore, if you do not know clearly where you are heading, then you will not know how best to use any information that might be produced.

o M&E also suffers when the design team does not allocate enough resources to the M&E system. Critical resources include: funding for information management, participatory monitoring activities, field visits, etc.; time for a start-up phase that is long enough to establish the M&E system and to monitor and reflect; and expertise, such as a consultant to support M&E development.

o The more rigid a project design is, the more difficulty the project team will have in adjusting it as a result of changes in the context and in the understanding of interim impacts.

o The Log Frame Approach - A methodology for planning, managing and evaluating programmes and projects, using tools to enhance participation and transparency and to improve orientation towards objectives. The logical framework approach follows a hierarchical, results-oriented planning structure and methodology which focuses all project planning elements on the achievement of one project purpose. (A minimal sketch of the logframe hierarchy follows the source note below.)


Source: Chaplowe, Scott G. 2008. Monitoring and Evaluation Planning: Guidelines and Tools American Red Cross/CRS

M&E Module Series.
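As mentioned above, here is a minimal sketch of the logframe hierarchy; the narratives, indicators and assumptions in it are illustrative placeholders, not content from any CARE project logframe.

# Sketch: the logframe hierarchy as a nested structure. All labels and
# example entries are illustrative assumptions, not project content.

logframe = [
    {"level": "Goal",       "narrative": "Improved health of refugees in Dadaab camps",
     "indicators": ["e.g. incidence of WASH-related disease"],
     "assumptions": ["stable operating environment"]},
    {"level": "Purpose",    "narrative": "Improved water, sanitation and hygiene standards",
     "indicators": ["e.g. litres of water per person per day"],
     "assumptions": ["refugees adopt promoted practices"]},
    {"level": "Outputs",    "narrative": "Water points maintained; latrines constructed",
     "indicators": ["e.g. number of functional water points"],
     "assumptions": ["materials available on time"]},
    {"level": "Activities", "narrative": "Repair pumps; train hygiene promoters",
     "indicators": ["inputs/budget used"],
     "assumptions": ["staff and incentive workers in place"]},
]

for row in logframe:
    print(f"{row['level']:<11} {row['narrative']}")

Note how each row carries its own indicators and assumptions: this is what lets the M&E system measure progress at every level of the hierarchy rather than only at the activity level.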

• Introduction to M&E Systems – definition (i.e. what is a system?), the importance of an M&E system, key steps in setting up an M&E system, and key components of a functional M&E system.

The topics to cover include:

• A system is a group of things that connect to form some kind of coherent whole – a set of components that form a 'whole'. Examples of systems include the respiratory system, the blood circulation system, finance systems, etc.

• An M&E system therefore is a set of components that form a ‘whole’ in the entire M&E process.

The M&E system provides the information needed to assess and guide the project strategy,

ensure effective operations, meet internal and external reporting requirements, and inform

future programming. M&E should be an integral part of project design as well as project

implementation and completion.

• Questions to answer when developing M&E systems are:

o What does the project want to change and how?

o What are the specific objectives to achieve this change?

o What are the indicators and how will they measure this?

o How will the data be collected and analyzed?

• Objectives/importance of M&E systems:

o Measure progress - the M&E system aids in thinking about and clarifying goals & objectives.

o Improve accountability and management of resources

o Efficiently and effectively use data

o Improve coordination with partners

o Collect complete and timely information on project efforts

o A functioning M&E system provides a continuous flow of information that is useful both

internally and externally.

o Good M&E systems are also a source of knowledge capital.

o M&E systems can also aid in promoting greater transparency and accountability within

organizations


• Key Steps in setting up an M&E System

o Establish the purpose and scope – why do we need M&E and how comprehensive

should our M&E system be?

o Identify performance questions, information needs and indicators – what do we need

to know to monitor and evaluate the project in order to manage it well?

o Planning information gathering and organization – how will the required information

be gathered and organized?

o Planning critical reflection processes and events – how will we make sense of the

information gathered and use it to make improvements?

o Planning for quality communication and improvement – how and to whom do we want

to communicate what in terms of our project activities and processes?

o Planning for the necessary conditions and capacities – what is needed to ensure our

M&E system actually works?

• Key Components of an M&E System

o Program description – gives the purpose & scope of the M&E system, program goal &

objectives etc.

o Frameworks – these are the structures upon which the M&E system is built

o Detailed description of the planned indicators

o Data collection & management plan

o Plan for monitoring

o Plan for evaluation

o Plan for the utilization of the information gained

o Mechanism for updating the plan

• M&E Frameworks - Frameworks are key elements of M&E systems that show the components

of a project and the sequence of steps needed to achieve the desired outcomes. They help

increase understanding of the program’s goals and objectives, define the relationships between

factors key to implementation, and outline the internal and external elements that could affect

its success. They are crucial for understanding and analyzing how a program is supposed to

work.

• Importance of M&E frameworks - M&E frameworks are important because they:

o Assist in understanding and analyzing a programme

o Help to develop sound M&E plans and implementation of M&E activities

o Show programme goals and measurable short, medium and long-term objectives

o Define relationships among inputs, activities, outputs, outcomes and impacts

o Clarify the relationship between programme activities and external factors.

o Demonstrate how activities will lead to desired outcomes and impacts, especially when

resources are not available to conduct rigorous impact evaluations. They often display

relationships graphically.

Note: There is no one perfect framework and no single framework is appropriate for all situations,

but there are three common types, namely:

a. Conceptual Frameworks

b. Results Frameworks

c. Logical Frameworks


Conceptual frameworks - A conceptual framework, sometimes called a “research framework,” is useful for identifying and illustrating the factors and relationships that influence the outcome of a program or intervention. Conceptual frameworks are typically shown as diagrams illustrating causal linkages between the key components of a program and the outcomes of interest.

• Results Frameworks - These are sometimes called “strategic frameworks.” They show the direct causal relationships between the incremental results of the key activities all the way up to the overall objective and goal of the intervention. This clarifies the points in an intervention at which results can be monitored and evaluated.

Results frameworks include an overall goal, a strategic objective (SO) and intermediate results

(IRs).

o A Strategic Objective (SO) - is an outcome that is the most ambitious result that can be

achieved and for which the organization is willing to be held responsible.

o An Intermediate Result (IR) is a discrete result or outcome that is necessary to achieve

an SO.

Source: Frankel, N. & Gage, A. (Jan 2007) M&E Fundamentals: A self-guided mini course. USAID/Measure Evaluation

• Logical Framework - This is derived from the logic model, which provides a streamlined, linear interpretation of a project’s planned use of resources and its desired ends.

• The Logic Model has five essential components:

o inputs – the resources invested in a program, for example, technical assistance,

computers, condoms or training;

o processes/activities – the activities carried out to achieve the program’s objectives;

o outputs – the immediate results achieved at the program level through the execution of

activities;


o outcomes – the set of short-term or intermediate results at the population level

achieved by the program through the execution of activities; and

o impacts – the long-term effects, or end results, of the program, for example, changes in

health status

Source: Michael Ochieng

• The arrows in the above figure show the two directions of the logical thinking: when planning, one commences from the higher-level impact, while implementation runs in the opposite direction, beginning with the activities and moving up to the higher-level impact.

• Group Activity on Logic Models

• The logic models, when expanded to include indicators, are referred to as logical frameworks. Basically, the Logical Framework (logframe) is a matrix that specifies what the project is intended to achieve (objectives) and how this achievement will be measured (indicators). Elements of a typical Logical Framework include:

o Impact, outcome and output.

o Indicators

o Baseline, milestones and targets

o Data sources often referred to as means of verification

o Risks and assumptions

o Inputs (financial and human resources).

• Below is a Logical Framework Definition Table



Project objectives | Indicators | Means of verification | Assumptions

Goal – A simple, clear statement of the impact or results to be achieved by the project.

o Indicator: Impact indicator – quantitative or qualitative means to measure achievement or to reflect the changes connected to the stated goal.

o Means of verification: Measurement method, data sources, and data collection frequency for the stated indicator.

o Assumptions: External factors necessary to the long-term impact, but beyond the control of the project.

Outcomes – The set of beneficiary and population-level changes needed to achieve the goal (usually knowledge, attitudes and practices, or KAP).

o Indicator: Outcome indicator – quantitative or qualitative means to measure achievement or to reflect the changes connected to the stated outcomes.

o Means of verification: Measurement method, data sources, and data collection frequency for the stated indicator.

o Assumptions: External conditions necessary if the outcomes are to contribute to achieving the goal.

Outputs – Products or services needed to achieve the outcomes.

o Indicator: Output indicator – quantitative or qualitative means to measure completion of stated outputs (measures the immediate product of an activity).

o Means of verification: Measurement method, data sources, and data collection frequency for the stated indicator.

o Assumptions: Factors out of the project’s control that could restrict or prevent the outputs from achieving the outcomes.

Activities – Regular efforts needed to produce the outputs.

o Indicator: Process indicator – quantitative or qualitative means to measure completion of stated activities, e.g., attendance at the activities.

o Means of verification: Measurement method, data sources, and data collection frequency for the stated indicator.

o Assumptions: Factors out of the project’s control that could restrict or prevent the activities from achieving the outcomes.

Inputs – Resources used to implement activities (financial, materials, human).

o Indicator: Input indicator – quantitative or qualitative means to measure utilization of stated inputs (resources used for activities).

o Means of verification: Measurement method, data sources, and data collection frequency for the stated indicator.

o Assumptions: Factors out of the project’s control that could restrict or prevent access to the inputs.

Source: Chaplowe, Scott G. 2008. Monitoring and Evaluation Planning: Guidelines and Tools American Red Cross/CRS

M&E Module Series.
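For teams that keep the logframe in a spreadsheet or a simple database, it can help to see one row of the matrix expressed as a data structure. Below is a minimal, illustrative Python sketch; it is not part of the CARE toolkit, and all class and field names (and the example values) are hypothetical:

from dataclasses import dataclass
from typing import List

@dataclass
class LogframeRow:
    """One level of the logframe: goal, outcome, output, activity or input."""
    level: str                  # e.g. "Goal", "Outcome", "Output", "Activity", "Input"
    objective: str              # clear statement of the result to be achieved
    indicators: List[str]       # quantitative/qualitative measures of achievement
    means_of_verification: str  # measurement method, data sources, collection frequency
    assumptions: str            # external factors beyond the project's control

# Hypothetical example: an Outcome row for a WASH project
outcome_row = LogframeRow(
    level="Outcome",
    objective="Improved hygiene practices among camp households",
    indicators=["% of caregivers practising hand washing at critical times"],
    means_of_verification="Household KAP survey at baseline and endline",
    assumptions="Water supply remains adequate for hygiene practices",
)
print(outcome_row.level, "-", outcome_row.indicators[0])

One advantage of a structured representation like this is that the indicator matrix and tracking tables discussed later can be generated from the same source.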

• Key Differences between the 3 Frameworks

The key differences between the 3 frameworks can be summarized below:

Type of framework (brief description) | Program management | Basis for monitoring and evaluation?

Conceptual – interaction of various factors | Determines which factors the program will influence | No; can help explain results

Results – logically linked program objectives | Shows the causal relationship between program objectives | Yes – at the objective level

Logic Model – logically linked inputs, processes/activities, outputs and outcomes | Shows the causal relationship between inputs and objectives | Yes – at all stages of the program, from inputs to processes to outputs to outcomes/objectives

Source: Frankel, N. & Gage, A. (Jan 2007) M&E Fundamentals: A self-guided mini course. USAID/Measure Evaluation

� M&E Frameworks – developing Log Frames (Group Work activity)


Topic 4: Standards and Indicators

The aim of this topic is to enhance knowledge and skills of trainees in setting standards and indicators as part

of the M&E system. The following are the learning outcomes:

• What are Indicators

• Rationale for Indicators

• Purpose of Indicators

• Classification of indicators

• Characteristics of indicators

• Criteria for selecting indicators

• Selecting indicators and defining indicators for an M&E plan

� Introduction to indicators

An indicator is a variable used to measure progress. It is also defined as a quantitative or qualitative variable

that allows the verification of changes produced by a development intervention relative to what was

planned (UNDG Harmonized Terminology, 2003). Indicators can also be defined as markers that help to

measure change by showing progress towards meeting objectives. Indicators differ from objectives in that

they address specific criteria that will be used to judge the success of the project or program.

In other words, an indicator is a means of measuring what actually happens against what has been planned

in terms of quantity, quality and timeliness, for every level of result.

Defining indicators – attribution: An important consideration in defining your indicators is that of attribution.

This refers to the extent to which the change you are measuring is directly attributable to, or the result of, your

project activities. In many instances change is the result of a range of different contributing factors, so it is

important to be realistic and specific about what change you are measuring as the consequence of your

project. The more specific the indicator, the better.

What is the rationale for indicators? – Indicators are important because:

• The changes that have been achieved by the project’s activities need to be measured.

• They enable one to assess the degree to which project inputs, activities, outputs, effects and impact

have been achieved.

• Indicators ‘indicate’ that change is happening or not happening.

• Clarify the scale and scope of a result in the results framework

• Demonstrate progress when things go right

• Provide early warning when things go wrong

• Assist in identifying changes that need to be made in strategy and practice

• Inform decision making

• Facilitate effective evaluation

� Classification of indicators

Indicators are classified broadly into three categories: Qualitative, Quantitative and Proxy Indicators.

• Quantitative Indicators - are measures of quantity, such as the number of men and women in decision-making positions, the percentage of boys and girls attending primary school, or the level of income per year by sex as compared to a baseline level.

• Qualitative indicators - reflect people's judgments, opinions, perceptions, feelings, and attitudes of a

given situation or subject. They can include changes in sensitivity; changes in behaviour, changes in

quality of life, satisfaction; influence; relevance; awareness; understanding; attitudes; quality; the

perception of usefulness; the application of information or knowledge; the degree of openness; the

quality of participation; the nature of dialogue; or the sense of well-being.

Page 28: CARE Ddb ME Training Toolkit_Final

Page 28 of 46

• Proxy Indicators - These are “indirect” indicators used when it is difficult to directly measure a result or change. We then determine an indicator that is symbolic of, or approximates, the change we are measuring. For example, if we find it difficult to directly measure improvements in household income, we may determine an indicator that measures increased purchasing of a necessary household item, or increased savings. Proxy indicators rely on cause-and-effect assumptions – so be clear about these.

� Types of indicators

There are five general types of indicators which correspond to the five project hierarchy levels. These are:

input indicators, process/activity indicators, output indicators, outcome indicators and impact indicators.

• Input Indicators – these describe what goes into the project, e.g. materials/equipment, the amount of money, the amount of staff time.

• Activity (process) Indicators – these document the number of activities or their percentage completion, e.g. number of trainings conducted, number of events marked, number of hygiene promotion campaigns.

• Output Indicators – these describe the goods and services produced by project activities, e.g. the number of latrines constructed, number of deep wells constructed, number of women who completed the education literacy program, number of community workers trained.

• Outcome Indicators – these describe the changes in systems or behaviours resulting from the

achievement of an intermediate goal e.g. the number of beneficiaries practicing hand washing after 6

months (behaviour); the number of water committees practicing participatory good governance

(systemic).

• Impact Indicators – these measure actual changes in conditions of the basic problem identified, including

changes in livelihood status, health, wealth, discrimination, inequity etc. For example, water-borne

disease prevalence; changes in women’s social position (qualitative); level of vulnerability.

� Ideal characteristics of indicators

Good indicators should possess the following characteristics; they should be:

• Measurable - they should be based on available data

• Technically feasible - it should be practical to collect and measure them with the skills and resources available

• Reliable - Conclusions based on them should be the same if measured by different people at different

times under different circumstances

• Valid - They should actually measure what they are supposed to measure

• Relevant - they should apply to final and intermediate goals

• Sensitive - they should be sensitive to changes in the situation being observed

• Cost effective - the results should be worth the time and money it costs to apply them

• Timely - it should be possible to collect the data reasonably quickly

Note: the steps for working with the indicator should be capable of being carried out with the target

community and other stakeholders in a participatory manner (data collection, analysis, and use)

� SMART Indicators

This is an acronym, which means that good indicators should be:

� Specific - In terms of quantity, quality, time, location, target groups, baseline and target for the indicator

� Measurable – questions to ask include:

� Will the indicator show desirable change?

� Is it a reliable and clear measure of results?

� Is it sensitive to changes in policies & programmes?

� Do stakeholders agree on exactly what to measure?

� Achievable - Are the result(s) realistic, based on risk assessment, partnership strategy and other factors contributing to the underlying result?

� Realistic – questions to ask include:


� Is it relevant to the intended result?

� Does it reflect the expectations and success criteria for change in the target groups?

� Time Bound/Trackable – questions to ask include:

� Are data actually available at reasonable cost & effort?

� Can proxy indicators be used?

� Are data sources known?

Note: Indicators should be in compliance with international norms and be easily understandable by all

stakeholders. Also, choosing proper indicators of change is crucial to setting up an effective monitoring and evaluation system.

� Targets, Baseline and Milestones (Benchmarks):

� A Target - A target is the value we set out to achieve. It is an explicit statement of the result expected for an indicator over a specified time period (to be provided at the level of outputs, outcomes and impact).

� A Baseline - A baseline is the situation just before, or at the outset of, a new program, project, service or operation, against which progress can be measured or comparisons can be made as part of monitoring and evaluation.

Effective monitoring is nearly impossible without an established baseline.

� Milestones – these are expected values or levels of achievement at specified periods of time.
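One way to see how a baseline, milestones and a target work together: progress can be expressed as the share of the baseline-to-target distance covered so far. The following is a minimal Python sketch with invented figures (not drawn from any CARE project):

# All figures below are invented, e.g. % of households treating drinking water.
baseline, target = 40.0, 80.0
milestones = {"Year 1": 55.0, "Year 2": 70.0}
actual_year1 = 52.0

# Progress as the share of the baseline-to-target distance covered so far
progress = (actual_year1 - baseline) / (target - baseline)
print(f"{progress:.0%} of the way from baseline to target")        # 30%

# Gap against the Year 1 milestone
gap = milestones["Year 1"] - actual_year1
print(f"{gap:.1f} percentage points behind the Year 1 milestone")  # 3.0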

� Indicator Traps

These include:

� Indicator overload - Indicators do not need to capture everything in a project, but only what is necessary

and sufficient for monitoring and evaluation.

� Output fixation - Counting myriad activities or outputs is useful for project management but does not

show the project’s impact. For measuring project effects, it is preferable to select a few key output

indicators and focus on outcome and impact indicators whenever possible.

� Indicator imprecision - Indicators need to be specific so that they can be readily measured. For example,

it is better to ask how many children under age 5 slept under an insecticide-treated bednet the previous

night than to inquire generally whether the household practices protective measures against malaria.

� Excessive complexity - Complex information can be time-consuming, expensive, and difficult for local

staff to understand, summarize, analyze, and work with. Keep it simple, clear, and concise.

� Indicator Matrix (Indicator Definition Table)

An indicator matrix is a critical tool for planning and managing data collection, analysis, and use. It expands

the logframe to identify key information requirements for each indicator and summarizes the key M&E tasks

for the project. While the name and format of the indicator matrix may vary (e.g., M&E plan, indicator planning matrix, or data collection plan), the overall function remains the same. Often, the project donor will have a required format.

The following are the major components (column headings) of the indicator matrix:

Indicators: The indicators provide clear statements of the precise information needed to assess whether

proposed changes have occurred. Indicators can be either quantitative (numeric) or qualitative (descriptive

observations). Typically the indicators in an indicator matrix are taken directly from the logframe.

Source: Excerpted from Chaplowe, Scott G. 2008. Monitoring and Evaluation Planning: Guidelines and Tools. American Red Cross/CRS M&E Module Series.


Indicator Definitions: Each indicator needs a detailed definition of its key terms, including an explanation of

specific aspects that will be measured (such as who, what, and where the indicator applies). The definition

should explain precisely how the indicator will be calculated, such as the numerator and denominator of a

percent measure. This column should also note if the indicator is to be disaggregated by sex, age, ethnicity,

or some other variable.
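As a concrete illustration of the numerator/denominator idea, the short sketch below computes a hypothetical percent indicator and disaggregates it by sex; all records are invented example data, not CARE figures:

# Invented example records: one row per enrolled pupil.
records = [
    {"sex": "F", "completed": True},
    {"sex": "F", "completed": False},
    {"sex": "M", "completed": True},
    {"sex": "M", "completed": True},
]

def percent_indicator(rows):
    """Numerator: pupils who completed; denominator: all enrolled pupils."""
    numerator = sum(1 for r in rows if r["completed"])
    denominator = len(rows)
    return 100.0 * numerator / denominator if denominator else None

print("Overall:", percent_indicator(records))  # 75.0
for sex in ("F", "M"):
    subset = [r for r in records if r["sex"] == sex]
    print(sex, ":", percent_indicator(subset))  # F: 50.0, M: 100.0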

Methods/Sources: This column identifies sources of information and data collection methods or tools, such

as use of secondary data, regular monitoring or periodic evaluation, baseline or endline surveys, PRA, and

focus group discussions. This column should also indicate whether data collection tools (questionnaires,

checklists) are pre-existing or will need to be developed. Note that while the logframe column on “Means of Verification” may list a source or method, e.g., “household survey,” the M&E plan requires much more detail, since the M&E work will be based on the specific methods noted.

Frequency/Schedules: This column states how often the data for each indicator will be collected, such as

monthly, quarterly, or annually. It is often useful to list the data collection timing or schedule, such as start-

up and end dates for collection or deadlines for tool development. When planning for data collection timing,

it is important to consider factors such as seasonal variations, school schedules, holidays, and religious

observances (i.e., Ramadan).

Person(s) Responsible: This column lists the people responsible and accountable for the data collection and

analysis, i.e., community volunteers, field staff, project managers, local partner/s, and external consultants.

In addition to specific people’s names, use the position title to ensure clarity in case of personnel changes.

This column is useful in assessing and planning for capacity building for the M&E system.

Data Analysis: This column describes the process for compiling and analyzing the data to gauge whether the

indicator has been met or not. For example, survey data usually require statistical analysis, while qualitative

data may be reviewed by research staff or community members.

Information Use: This column identifies the intended audience and use of the information. For example, the

findings could be used for monitoring project implementation, evaluating the interventions, planning future

project work, or reporting to policy makers or donors. This column should also state ways that the findings

will be formatted (e.g., tables, graphs, maps, histograms, and narrative reports) and disseminated (e.g.,

Internet Web sites, briefings, community meetings, listservs, and mass media).

The indicator matrix can be adapted to information requirements for project management. For example,

separate columns can be created to identify data sources, collection methods and tools, information use and

audience, or person(s) responsible for data collection and analysis. It may also be preferable to use separate

matrices for M&E indicators.

It is critical that the indicator matrix be developed with the participation of those who will be using it.

Completing the matrix requires detailed knowledge of the project and context provided by the local project

team and partners. Their involvement contributes to data quality because it reinforces their understanding

of what data they are to collect and how they will collect them.

An example of an indicator matrix is shown below:


Example – Outcome 1a: Percent of primary-age children who complete primary education (graduation rates)

Indicator Definition:
1. Primary-age children refer to ages between 6 and 15 years.
2. Completion of primary school is at Class/Standard 8 in the Kenyan education system.
3. Numerator: number of primary school pupils who sit for the Kenya Certificate of Primary Education (KCPE) in the camp-based schools in one year. Denominator: total number of children in the community per defined age category who are enrolled in Class/Standard 1.

Methods/Sources:
1. Endline randomized household survey
2. Community focus group discussions
3. Community key informant interviews

Person/s Responsible: External evaluation team

Frequency/Schedules:
1. Endline survey depends on the project timeline
2. School focus group discussions (FGDs) with teachers, students and administration at the end of the project
3. Beginning of data collection according to the project timeline
4. Endline survey questionnaire pending; depends on the project timeline

Data Analysis:
1. Project management team during project reflection meeting
2. Post-project meeting with implementing partners in Dadaab, facilitated by project manager

Information Use:
1. Project implementation and decision making with community
2. Monitoring process of project with CARE management & donors
3. Impact evaluation to justify intervention to Ministry of Education and donors

Source: Chaplowe, Scott G. 2008. Monitoring and Evaluation Planning: Guidelines and Tools American Red Cross/CRS

M&E Module Series.

� Indicator Tracking Table

This is a document to help both the implementers and program managers to keep track of the progress of

the indicators. An example is shown below:

Indicator | Target | Jan | Feb | Mar | Q1 Totals | Apr | May | June | Q2 Totals
1.0 | | | | | | | | |
1.1 | | | | | | | | |
1.2 | | | | | | | | |
1.3 | | | | | | | | |
1.4 | | | | | | | | |
1.5 | | | | | | | | |

Source: Michael Ochieng
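As an illustration of how such a tracking table might be maintained programmatically, the sketch below rolls hypothetical monthly indicator counts into quarterly totals and compares them to targets; all indicator names and figures are invented:

# Invented monthly counts for two indicators (e.g. latrines built, trainings held).
monthly = {
    "1.1": {"Jan": 40, "Feb": 35, "Mar": 50, "Apr": 42, "May": 38, "Jun": 45},
    "1.2": {"Jan": 2, "Feb": 3, "Mar": 1, "Apr": 4, "May": 2, "Jun": 3},
}
targets = {"1.1": 500, "1.2": 24}
quarters = {"Q1": ["Jan", "Feb", "Mar"], "Q2": ["Apr", "May", "Jun"]}

for ind, values in monthly.items():
    totals = {q: sum(values[m] for m in months) for q, months in quarters.items()}
    achieved = sum(totals.values())
    print(f"Indicator {ind}: Q1={totals['Q1']}, Q2={totals['Q2']}, "
          f"progress={100 * achieved / targets[ind]:.0f}% of target {targets[ind]}")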


Topic 5: Developing M&E tools and Data Management

The aim of this topic is to equip the trainees with knowledge and skills for developing M&E tools and data

management for their use in the WASH project. The following modules will be covered:

� Introduction to data management

This is aimed at enabling the participants to understand:

� The role of data in decision making during project implementation;

� The determinants of data use;

� The importance of reporting on time and correctly; and

� The importance of information sharing and feedback

Significant human and financial resources have been invested worldwide in the collection of population,

facility, and community-based data. However, this information often is not used by key stakeholders to

effectively inform policy and programmatic decision making. As a result, many health programs fail to fully

link evidence to decisions and suffer from a decreased ability to respond to the priority needs of the

populations they serve.

Purpose: Many possible factors undermine evidence-based decision making. Some relate to how

information flows to decision makers, and how they make their decisions; others to the context in which

information is collected and decisions are made; and yet others to the organizational infrastructure and

technical capacity of those that generate and use data.

Remember: “Without information, things are done arbitrarily and one becomes unsure of whether a policy or program will fail or succeed. If we allow our policies to be guided by empirical facts and data, there will be a noticeable change in the impact of what we do.”

Ask participants to state the two (2) main challenges they face in relation to data in their work.

Remember to emphasize that: Better Data = Better Decisions

The question to ask then is: why do we need better data? Better-quality data lead to more informed decisions, which in turn lead to better program performance, which ultimately leads to lives saved, as depicted below:

Better Quality Data → More Informed Decisions → Better Program Performance → Lives Saved

What is data? – Simply stated, data are raw numbers or facts, not yet processed, that are used for reasoning and calculation. Data can be related to indicators (discussed in Topic 4 above), where it was stated that an indicator is a variable that measures changes over time or enables comparison between different areas.


Consequently, the findings in a project result from the analysis of indicators, as depicted below, which shows a progression from data through indicators and finally to findings:

Data → Indicators → Findings

What are recommendations? – These are the actions that should be taken based on the findings. This means that the logical progression is: Data → Indicators → Findings → Recommendations.


So why are we collecting this data? – Let’s look at the picture below. What does it depict?

From the picture, we can deduce that:

� There should be good reasons for collecting M&E data.

� The data being collected should be known to all involved.

Why we collect data in CARE programs/projects – the reasons are classified into non-negotiable and negotiable.

� Non-negotiable – this is when the data collection is a must. In most cases these are data for the Government of Kenya (e.g. data submitted to the Directorate of Refugee Affairs (DRA) and other government-line ministries) and data that capture donor-related indicators.

� Negotiable – this is when we choose as a program either to collect or not to collect the data. These may be data for grant indicators, program management, advocacy, research, etc.

Information Pyramid – this shows the hierarchical nature of the data collected, with the bulk being collected and used at the field level. Higher up the pyramid is the donor or headquarters level:

o HQ or Donor Level (apex)

o Programs level, e.g. CARE Country Office, Nairobi, or CARE Regional Office (ECARMU)

o Field level and other project reports, e.g. Refugee Assistance Program, Dadaab (base)

Why are data important to program managers and monitoring officers? – Data are important to these staff because they:

� help to ask critical questions

� open lines of communication between the various levels of implementation

� improve service delivery of projects


� Data quality

There are three ways (referred to as the 3 Cs) to check the quality of data. These are:

� Completeness – check that the required data is readily available for use when needed. This is done

by checking the gaps in data as entered into the data collection tools. For example, check for missing

values, check for appropriateness of entries, and check that all areas have been covered.

� Correctness – ask yourself: are the numbers possible? Consequently, check for impossible (out-of-range) data.

� Consistency – look for anything strange or unusual about the data. This may be repetition of figures,

preferential end digits such as 0 (zero) and 5, and unlikely differences. In doing this, look down and

across the data collection tool.
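The three checks can be partly automated for tabular data. Below is a minimal Python sketch, assuming hypothetical field names and an illustrative threshold for digit preference; it is not a prescribed CARE procedure:

# Invented monitoring records with deliberate problems.
records = [
    {"site": "Ifo", "latrines": 12, "attendees": 150},
    {"site": "Dagahaley", "latrines": None, "attendees": 95},  # missing value
    {"site": "Hagadera", "latrines": 420, "attendees": 100},   # implausibly large
]

# Completeness: flag missing values.
for r in records:
    missing = [k for k, v in r.items() if v is None]
    if missing:
        print(f"{r['site']}: missing fields {missing}")

# Correctness: flag impossible (out-of-range) values.
for r in records:
    if r["latrines"] is not None and not 0 <= r["latrines"] <= 100:
        print(f"{r['site']}: latrines value {r['latrines']} looks out of range")

# Consistency: flag preferential end digits (many values ending in 0 or 5).
attendance = [r["attendees"] for r in records]
share_rounded = sum(1 for v in attendance if v % 5 == 0) / len(attendance)
if share_rounded > 0.6:
    print("Warning: most attendance figures end in 0 or 5 (possible rounding)")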

Remember: “Much of the material remains unprocessed, or if processed, un-analyzed, or if analyzed, not written up, or if written up, not read, or if read, not used or acted upon…” (Chambers R, 1983). To avoid such a scenario, it is important to ensure that the data is real.

How do we ensure data is real? This is best done by conducting a data audit. The objectives of doing the

audit are to:

� Validate the submitted data

� Identify systemic data collection problems.

The procedure for conducting a data audit is:

� Compare calculated data to monthly reports from source e.g. the school data against submitted

school meal program data.

� Keep track of the number of errors per data element. Different types of data elements imply different analyses. For example, if ‘gender’ calculations are incorrect, this most likely means a tabulation error; if treatment data are incorrect, there could be a problem with the understanding of the definition.

The steps for conducting a data audit are:

1. Identify the data flow

2. Collect completed forms

3. Trace data back to the source
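The comparison step lends itself to a simple script. Below is a minimal sketch that checks reported figures against source records and counts discrepancies per data element; all school names and numbers are invented for illustration:

# Invented figures: reported school-meal data vs the source records.
source = {"SchoolA": {"meals": 1200, "girls": 610},
          "SchoolB": {"meals": 900, "girls": 450}}
reported = {"SchoolA": {"meals": 1200, "girls": 601},
            "SchoolB": {"meals": 950, "girls": 450}}

# Count discrepancies per data element; different elements hint at different
# problems (e.g. tabulation errors vs misunderstood definitions).
errors_per_element = {}
for school, src in source.items():
    for element, true_value in src.items():
        if reported[school][element] != true_value:
            errors_per_element[element] = errors_per_element.get(element, 0) + 1

for element, count in errors_per_element.items():
    print(f"{element}: {count} discrepancy(ies) - trace back to the source forms")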

Data Triangulation – this is an alternative option in data auditing. Triangulation means using different methods to research the same issue with the same unit of analysis (e.g. an in-depth unstructured interview with each member of a household on health care needs, following a survey of household heads on the same topic).

Which methods to choose will depend on:

� Nature of the project

� Type of information which is needed

� Context of the study

� Availability of resources (time, money and personnel).

Contradictory results suggest problems with:

� Data collection

� Design

� Training


Example from the education sector (referred to under ‘Time’ below) – trend in weekly enrolment for the term, gross enrolment compared to net enrolment:

Week | Gross Enrolment | Net Enrolment (ave. weekly attendance)
1 | 16,027 | 8,060
2 | 16,352 | 11,817
3 | 17,909 | 12,392
4 | 15,773 | 12,657
5 | 15,819 | 12,892
6 | 15,874 | 13,066
7 | 15,950 | 13,367
8 | 15,950 | 13,473
9 | 15,950 | 11,047
10 | 16,341 | 13,644
11 | 16,376 | 13,802
12 | 16,615 | 13,873
13 | 16,615 | 5,697
14 | 16,615 | 9,841

� Data Interpretation

When the data is finally collected and analyzed, it needs to be acted upon. Interpretation answers the question: what is happening? The bottom line is: interpret to get findings and then act (make an informed decision) – Data → Indicators → Findings → Actions.

The 3 Ts of data interpretation – these are:

� Time – does the indicator value change over time? See the weekly enrolment example above, from the education sector (Source: CARE Kenya, School Meal Program report).

� Target – how does the indicator value compare with target value? Targets are usually based on:

o Published standards

o Project progress plans

o Area expected rates.

Note: There could be a possibility of no target existing for an indicator.



� Triangulate – ask the question: are changes in the indicator value comparable with other related data? Compare the data to other similar settings and to related indicators from either the project or other similar projects (a simple time-trend check is sketched after this list).
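A minimal sketch of the ‘Time’ check, using the net enrolment series from the table above; the 20% threshold is an illustrative choice, not a standard:

# Net enrolment (average weekly attendance) from the table above.
net = [8060, 11817, 12392, 12657, 12892, 13066, 13367, 13473,
       11047, 13644, 13802, 13873, 5697, 9841]

for week in range(1, len(net)):
    change = (net[week] - net[week - 1]) / net[week - 1]
    if abs(change) > 0.20:  # flag large week-on-week swings for follow-up
        print(f"Week {week + 1}: {change:+.0%} vs previous week - investigate "
              "(holiday? data entry error? real drop in attendance?)")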

In summary, we can state that data for M&E are collected, processed and analyzed, and decisions are made for corrective action, as summarized below:

The project → 1. Collecting data → 2. Processing data → 3. Analysis → 4. Taking decisions → 5. Taking corrective action → (back to the project)

� Data Collection and Data Collection Tools

M&E is about collecting data which can then be used to assess/track the progress of a project towards the desired goal (monitoring) and to determine whether a project accomplished what it was intended to achieve (evaluation).

As seen above, good data are key for evidence-based decision making and are increasingly being used by donors practicing results-based funding (e.g. the Global Fund, PEPFAR).

Major Sources of Data for Project Planning, M&E include:

� Secondary data. Useful information can be obtained from other research, such as surveys and other

studies previously conducted or planned at a time consistent with the project’s M&E needs, in-depth

assessments, and project reports. Secondary data sources include government planning

departments, university or research centers, international agencies, other projects/programs

working in the area, and financial institutions.

� Sample surveys. A survey based on a random sample taken from the beneficiaries or target audience

of the project is usually the best source of data on project outcomes and effects. Although surveys

are laborious and costly, they provide more objective data than qualitative methods. Many donors

expect baseline and endline surveys to be done if the project is large and alternative data are

unavailable.

� Project output data. Most projects collect data on their various activities, such as number of people

served and number of items distributed.

� Qualitative studies. Qualitative methods that are widely used in project design and assessment are:

participatory rapid appraisal, mapping, Participatory Rural Appraisal, key informant interviews, focus

group discussions, and observation.

� Checklists. A systematic review of specific project components can be useful in setting benchmark standards and establishing periodic measures of improvement.



� External assessments/Evaluations. Project implementers as well as donors often hire outside

experts to review or evaluate project outputs and outcomes. Such assessments may be biased by

brief exposure to the project and over-reliance on key informants. Nevertheless, this process is less

costly and faster than conducting a representative sample survey and it can provide additional

insight, technical expertise, and a degree of objectivity that is more credible to stakeholders.

� Participatory assessments. The use of beneficiaries in project review or evaluation can be

empowering, building local ownership, capacity, and project sustainability. However, such

assessments can be biased by local politics or dominated by the more powerful voices in the

community. Also, training and managing local beneficiaries can take time, money, and expertise, and

it necessitates buy-in from stakeholders. Nevertheless, participatory assessments may be

worthwhile as people are likely to accept, internalize, and act upon findings and recommendations

that they identify themselves.

Considerations for Data Collection – the following should be considered when preparing for data collection:

� Prepare data collection guidelines. This helps to ensure standardization, consistency, and reliability

over time and among different people in the data collection process. Double-check that all the data

required for indicators are being captured through at least one data source.

� Pretest data collection tools. Pretesting helps to detect problematic questions or techniques, verify

collection time, identify potential ethical issues, and build the competence of data collectors.

� Train data collectors. Provide an overview of the data collection system, data collection techniques,

tools, ethics, and culturally appropriate interpersonal communication skills. Give trainees practical

experience collecting data.

� Address ethical concerns. Identify and respond to any concerns expressed by the target population.

Ensure that the necessary permission or authorization has been obtained, that local customs and

attire are respected, and that confidentiality and voluntary participation are maintained.

Reducing data collection costs – data collection can be a costly endeavour. How then can an organization

reduce these costs yet still maximize on the quality of data collected? One of the best ways to reduce data

collection costs is to reduce the amount of data collected (Bamberger et al. 2006). The following questions

can help simplify data collection and reduce costs:

� Is the information necessary and sufficient? Collect only what is necessary for project management

and evaluation. Limit information needs to the stated objectives, indicators, and assumptions in the

logframe.

� Are there reliable secondary data sources? This can save costs for primary data collection.

� Is the sample size adequate but not excessive? Determine the sample size that is necessary to estimate or detect change, and consider using stratified and cluster samples (a simple sample-size sketch follows this list).

� Can the data collection instruments be simplified? Eliminate extraneous questions from

questionnaires and checklists. In addition to saving time and cost, this has the added benefit of

reducing “survey fatigue” among respondents.
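On the sample size question, one commonly used starting point is Cochran’s formula for estimating a proportion, n = z²·p(1−p)/e², inflated by a design effect when cluster sampling is used. A minimal sketch follows; the defaults are illustrative, not CARE or donor guidance:

import math

def sample_size(p=0.5, e=0.05, z=1.96, design_effect=1.0):
    """Cochran's sample size for estimating a proportion.
    p: expected proportion (0.5 is the most conservative choice)
    e: desired margin of error; z: z-score for the confidence level (1.96 ~ 95%)
    design_effect: inflation factor for cluster sampling (often around 2)
    """
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n * design_effect)

print(sample_size())                   # 385 for simple random sampling
print(sample_size(design_effect=2.0))  # 769 for a cluster survey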

� Data Collection Tools and Techniques

There are a myriad of data collection tools and techniques but the most common include:

� Checklist: A list of items used for validating or inspecting that procedures/steps have been followed,

or the presence of examined behaviors.

� Community interviews/meeting: A form of public meeting open to all community members.

Interaction is between the participants and the interviewer, who presides over the meeting and asks

questions following a prepared interview guide.


� Direct observation: A record of what observers see and hear at a specified site, using a detailed

observation form. Observation may be of physical surroundings, activities, or processes. Observation

is a good technique for collecting data on behavior patterns and physical conditions.

� Focus group discussion: Focused discussion with a small group (usually 8 to 12 people) of

participants to record attitudes, perceptions, and beliefs pertinent to the issues being examined. A

moderator introduces the topic and uses a prepared interview guide to lead the discussion and elicit

discussion, opinions, and reactions.

� Key informant interview: An interview with a person having special information about a particular

topic. These interviews are generally conducted in an open-ended or semi-structured fashion.

� Laboratory testing: Precise measurement of a specific objective phenomenon, for example, infant weight or a water quality test.

� Most significant change (MSC): A participatory monitoring technique based on stories about

important or significant changes, rather than indicators. They give a rich picture of the impact of

development work and provide the basis for dialogue over key objectives and the value of

development programs.

� Questionnaire: A data collection instrument containing a set of questions organized in a systematic

way, as well as a set of instructions to the enumerator/interviewer about how to ask the questions

(typically used in a survey).

� Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to

understand community views on a particular issue. It is usually done quickly and intensively – over a

2 to 3-week period. Methods include interviews, focus groups, and community mapping.

� Survey: Systematic collection of information from a defined population, usually by means of

interviews or questionnaires administered to a sample of units in the population (e.g., person,

beneficiaries, and adults).


� Participatory Monitoring and Evaluation

Having a participatory approach to monitoring and evaluation, which seeks to involve all stakeholders where possible, has the following benefits:

� Empowers beneficiaries to analyze and act on their own situation (as “active participants” rather

than “passive recipients”)

� Builds local capacity to manage, own, and sustain the project. People are likely to accept and

internalize findings and recommendations that they provide.

� Builds collaboration and consensus at different levels—between beneficiaries, local staff and

partners, and senior management

� Reinforces beneficiary accountability, preventing one perspective from dominating the M&E process

� Saves money and time in data collection compared with the cost of using project staff or hiring

outside support

� Provides timely and relevant information directly from the field for management decision making to

execute corrective actions

� Information Reporting and Utilization

Reporting is closely related to M&E work, since data are needed to support the major findings and

conclusions presented in a project report. In reporting, data about intended achievement at baseline are compared with data on actual achievement to identify significant deviations from the plan, as a basis for identifying problems and opportunities and for determining corrective action.

Data about intended achievements and baseline → compared with data on actual achievements → significant deviations from plan → identification of problems and opportunities → corrective actions


Remember:

� use the tool adapted to the information and context

� define the method to use the tool

� triangulate most relevant data

� use the existing data

� gather causes and consequences of the identified problems

� gather opinions and suggestions to improve the quality of your project

The importance of M&E reporting is:

� Advance learning among project staff as well as the larger development community

� Improve the quality of the services provided

� Inform stakeholders on the project benefits and engage them in work that furthers project goals

� Inform donors, policy makers and technical specialists of effective interventions (and those that did

not work as hoped)

� Develop a project model that can be replicated and scaled-up.

Consequently, the purpose of M&E reports is to provide:

� updates on achievements against indicators and milestones,

� guidance on the elements that should be adjusted

Considerations for M&E reports – the following should be considered when planning for M&E reports:

� Design the M&E communication plan around the information needs of the users. The content and

format of data reports will vary, depending on whether the reports are to be used to monitor

processes, conduct strategic planning, comply with requirements, identify problems, justify a

funding request, or conduct an impact evaluation.

� Identify the frequency of data reporting needs. For example, project managers may want to review

M&E data frequently to assess project progress and make decisions, whereas donors may need data

only once or twice a year to ensure accountability.

� Tailor reporting formats to the intended audience. Reporting may entail different levels of

complexity and technical language; the report format and media should be tailored to specific

audiences and different methods used to solicit feedback.

� Identify appropriate outlets and media channels for communicating M&E data. Consider both

internal reporting, such as regular project reports to management and progress reports to donors, as

well as external reporting, such as public forums, news releases, briefings, and Internet Web sites.

Structure of M&E reports – the minimum structure of monitoring reports is:

1. Introduction

2. Monitoring of situation (external factors)

3. Monitoring of objectives and indicators / (+ critical events)

4. Progress of activities

5. Conclusions

6. Recommendations

7. Annexes

Common mistakes made in monitoring reports include:

� Going into irrelevant/useless details

� lack of analysis of the data

� not showing trends and warnings

� lack of recommendations

� no corrective actions are taken

� and reporting is perceived as a compulsory useless task.


Remember, if monitoring does not lead to analysis, and then to decision making (adaptations), then it is a

USELESS endeavour.

� Other forms of information sharing

The forms, other than reports, in which information can be shared include:

� Human Interest stories

� Anecdotes

� Lessons learned

Human Interest story - a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer. Alternatively, it is a type of story concerned with the activities of a few identified people; it is told as the “story behind the story,” showing the personal story behind a larger story affecting many people.

Anecdote - is a short personal account of an incident or event.

Lesson learned - is a clear and substantive finding on a specific issue based on data, observations, and

evaluation. It illustrates a strategy, technique, principle, process, or activity that should be followed in the

future. Lessons learned are well documented (not anecdotal) and backed up by clear qualitative and

quantitative evidence.


REFERENCES

CARE International (Nov 2003) CI Programme Standards Framework. Accessed at www.care.org

CARE International in Kenya Long-Range Strategic Plan (LRSP) 2013–2018. Unpublished.

CARE International in Kenya Refugee Assistance Program (RAP) Strategy 2013–2018. Unpublished.

CARE International (2012) Working for Poverty Reduction and Social Justice: The CARE 2020 Program Strategy. Accessed at www.care.org

Chaplowe, Scott G. (2008) Monitoring and Evaluation Planning: Guidelines and Tools. American Red Cross/CRS M&E Module Series. American Red Cross and Catholic Relief Services (CRS), Washington, DC and Baltimore, MD. Accessed at www.crs.org

Frankel, N. & Gage, A. (Jan 2007) M&E Fundamentals: A Self-Guided Mini Course. USAID/MEASURE Evaluation.

Herrero, S. (April 2012) Integrated Monitoring: A Practical Manual for Organizations That Want to Achieve Results. InProgress. Accessed at http://www.inprogressweb.com/resources

IFRC (2011) IFRC Framework for Evaluation. Accessed at www.ifrc.org

International Fund for Agricultural Development (IFAD) Section 4: Setting Up the M&E System, in Managing for Impact in Rural Development: A Guide for Project M&E. Accessed at www.ifad.org

Kusek, J.Z. & Rist, R.C. (2004) Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. Washington, DC: World Bank.

UNDP (2009) Handbook on Planning, Monitoring and Evaluating for Development Results. New York: UNDP. Accessed at http://www.undp.org/eo/handbook


ANNEXES

Annex I: Pre/Post-test Questions

Pre-test/Post-test questions

Circle the correct answer for the questions below. There is only one correct answer for each question.

1. Monitoring is sometimes referred to as:

a. Evaluation

b. Impact Evaluation

c. Process Evaluation

d. Performance Evaluation

2. Why do we monitor?

a. To track changes from baseline to desired outcome

b. To assess performance and progress towards the outcome

c. To alert managers to problems in performance and delivery

d. To analyze reasons why (or why not) progress is made and learn

e. All of the above

3. Evaluations measure:

a. The timeliness of a program’s activities

b. The outcomes and impact of a program’s activities

c. How closely a program kept to its budget

d. How well the program was implemented

4. At what stage of a program should monitoring take place?

a. At the beginning of the program

b. At the mid-point of the program

c. At the end of the program

d. Throughout the life of the program

5. When in the life of a project should evaluation take place?

a. Before a project starts

b. Mid-term

c. Throughout the project life

d. At the end of the project

e. All of the above

6. Which of the following is NOT considered “monitoring”?

a. Counting the number of people trained

b. Tracking the number of brochures disseminated

c. Attributing changes in health outcomes to an intervention

d. Collecting monthly data on clients served in a clinic.

7. Which of the following is not TRUE about why we monitor & evaluate programmes?

a. M&E mobilizes communities to support WASH

b. M&E shapes the decisions of funding agencies and policy makers

c. M&E can help institutionalize programmes

d. M&E results contribute to the global understanding of “what works”

e. None of the above

8. The primary difference between evaluation and monitoring is that:

a. evaluation is based on objectives and monitoring is not

b. evaluation can be carried out by persons external to the project or programme whereas monitoring can only

be done by personnel connected with the programme or project

c. evaluation can be carried out at the end of the programme or project but monitoring cannot be carried out

at the end of the programme or project.

d. decisions can be made after reviewing the results of evaluation, but decision making is not possible after

analysis of monitoring results.

9. M&E plans should include:

a. A detailed description of the indicators to be used

b. The data collection plan

c. A plan for the utilization of the information gained

d. All of the above

e. a and b only


10. The purpose of indicators is to:

a. Demonstrate the strength of the information system

b. Serve as benchmarks for demonstrating achievements

c. Provide program accountability

d. Describe the objectives of a project

11. Frameworks can:

a. Help increase understanding of a project’s goals and objectives

b. Define the relationships between factors key to project implementation

c. Delineate the internal and external elements that could affect a project’s success

d. All of the above

e. b and c only

12. The five key components of logic models are:

a. Inputs, processes, outputs, outcomes, impacts

b. Conceptual, results, logical, logframe, logic

c. Conceptual, indicators, outputs, outcomes, impacts

d. Indicators, inputs, processes, outputs, results

13. Frameworks that diagram the direct causal relationships between the incremental results of key project activities

and the overall objective and goal of the intervention are called:

a. Conceptual frameworks

b. Results frameworks

c. Logic models

d. All of the above

14. Indicators do NOT need to be directly related to the program’s objectives.

True

False

15. Which of the following is a characteristic of a good indicator?

a. It is clearly defined in unambiguous terms.

b. It produces the same results when used repeatedly to measure the same condition or event.

c. It measures only the condition or event it is intended to measure.

d. All of the above.

16. When selecting an indicator, care must be taken to ensure that it is one that program activities can affect.

True

False

17. A data collection plan should include the following:

a. The timing and frequency of collection

b. The person or agency responsible for the collection

c. The types of information needed for the indicators

d. All of the above

18. Data should be collected whenever possible, for the reason that they could perhaps be used some day.

True

False

19. The highest quality data are usually obtained through the triangulation of data from several sources.

True

False


Annex II: Map of Dadaab