
PROJECT MONITORING AND EVALUATION

Introduction to Monitoring and Evaluation (of Publicly Funded Projects)

Interest in publicly funded projects has grown considerably in recent years. Monitoring and evaluation are two of the main activities in the project cycle. Understanding these concepts, and performing them with due diligence, contributes to the successful implementation of public projects and provides useful information for improving future public interventions. The management performance of public projects is assessed through specific indicators defined along the five criteria set out by the OECD: relevance, efficiency, effectiveness, impact and sustainability. Government officials, development managers and civil society are increasingly aware of the value of monitoring and evaluation (M&E) of development activities. M&E provides a better means of learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders. Yet there is often confusion about what M&E entails.

Exploring Project Monitoring and Evaluation

Although the term “monitoring and evaluation” tends to be run together as if it were one thing, monitoring and evaluation are, in fact, two distinct sets of organizational activities, related but not identical.

What does the term ‘Monitoring’ mean to a project manager?

Monitoring is the systematic collection and analysis of information as a project progresses. It is aimed at improving the efficiency and effectiveness of a project or organization. It is based on targets set and activities planned during the planning phases of work. It helps to keep the work on track, and can let management know when things are going wrong. If done properly, it is an invaluable tool for good management, and it provides a useful base for evaluation. It enables you to determine whether the resources you have available are sufficient and are being well used, whether the capacity you have is sufficient and appropriate, and whether you are doing what you planned to do. Monitoring involves:

Establishing indicators of efficiency, effectiveness and impact;
Setting up systems to collect information relating to these indicators;
Collecting and recording the information;
Analyzing the information;
Using the information to inform day-to-day management.

What does the term ‘Evaluation’ mean to a project manager?

Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks at what you set out to do, at what you have accomplished, and how you accomplished it. It can be formative (taking place during the life of a project with the intention of improving the strategy or way of functioning of the project). It can also be summative (drawing lessons from a completed project that is no longer functioning). Formative and summative evaluation can be compared to the difference between a check-up and an autopsy. Evaluation helps to determine the project’s merit (does it work?) and its worth (do we need it?).

Evaluation involves:

Looking at what the project or organization intended to achieve: what difference did it want to make? What impact did it want to make?
Assessing its progress towards what it wanted to achieve, its impact targets.
Looking at the strategy of the project or organization. Did it have a strategy? Was it effective in following its strategy? Did the strategy work? If not, why not?
Looking at how it worked. Was there an efficient use of resources? What were the opportunity costs (see Glossary of Terms) of the way it chose to work? How sustainable is the way in which the project or organization works? What are the implications for the various stakeholders in the way the organization works?


A Comparative Analysis of Monitoring and Evaluation:

Monitoring
- Asks: “What happened?”
- Accepts the design as given.
- Focuses on: efficiency; execution; compliance with procedures; achievement of inputs, outputs and purpose.
- Feedback: is continuous; based on activities and interim achievements; short-term horizon.
- Replanning results in: adjustments to the implementation plan; input to current programming.

Evaluation
- Asks: “Why did it happen or not happen?”
- Challenges the design.
- Focuses on: causality; unplanned change; policy correctness; causal relationships among outputs, purpose and goal.
- Feedback: at important milestones; results; longer-term time frame.
- Replanning results in: adjustments to project strategy; relation to other projects.

The following outlines additional benefits of conducting project monitoring and evaluation:

1) Participants – Participants are core to the success of the project. In the long run, project sustainability will depend on the degree to which participants benefit directly, short-term and long-term, from the experiences or services. The evaluation will provide evidence of the ways in which participant learning is impacted.

2) Project Improvement – Project strengths and weaknesses can be identified through an evaluation. Of equal importance, an evaluation can map out the relationships among project components – that is, how the various parts of a project work together. This information can be used to re-design the project and increase both efficiency and effectiveness.

3) Public Relations – Data generated by an evaluation can be used to promote the products and services of the project within and outside of the agency. Instead of vague claims and uncertain assertions, statements based on evaluation results will be viewed as more substantial and justifiable. Importantly, if designed with sufficient care, an evaluation should be able to shed some light on why and how the project works.

4) Funding – More and more program managers require the design and implementation of a comprehensive, outcomes-based evaluation. They want to know what types of impacts the project has had. Even if evaluation is not required, an evaluation can provide evidence of project effectiveness; such evidence may be important when limited resources are being distributed internally. Evaluation results are often helpful in determining whether a project should be continued, scaled back, discontinued, or enhanced.

5) Improved Delivery – Projects evolve over time. What was once a coherent, discrete set of activities may have grown into a jumbled set of loosely related events. An evaluation can help clarify the purposes of the project, allowing decision-makers to examine project components against well-thought-out criteria. Valid comparisons of projects and activities can be made and duplication of efforts can be limited. It is quite possible that the evaluation will uncover a gem hidden in the jumble.


6) Capacity Building – Engaging staff members, volunteers, and other stakeholders in the design and implementation of an evaluation will provide opportunities for skill building and learning. As the project or program is examined, those involved will also develop insights into the workings of the project and perhaps even the workings of the organization. These insights can be used to inform a strategic examination of projects and programs by identifying priorities, overlap, gaps, and exemplars.

7) Clarifying Project Theory – When the project was designed initially, it was based either explicitly or implicitly on a project theory that explained how things work or how people learn or even how organizations change. An evaluation asks those involved to revisit that project theory. Based on experiences with the project and information taken from research literature, the evaluation provides an opportunity to revise the project theory. By making the project theory explicit, the underpinnings of the project and what makes it work will be better understood and thus, better implemented. Staff members and volunteers who understand why a particular set of teaching methods was selected or why the project activities were sequenced the way they were will be more likely to actually follow the plan. They will also feel more ownership in the project if they understand the theory behind the project more fully.

8) Taking Stock – Engaging in evaluation provides a precious opportunity to reflect on the project. It is an opportunity to document where the project has been and where it is going, and to consider whether the project is doing what its designers hoped it would do. Taking stock is more than accumulating information about the project; it is learning through the project.

At what point during the project life cycle are monitoring and evaluation results required?

During situation analysis and identification of the overall programme focus, lessons learned from past programme implementation are studied and taken into account in the programme strategies;

During programme design, data on indicators produced during the previous programme cycle serve as baseline data for the new programme cycle. Indicator data also enable programme designers to establish clear programme targets which can be monitored and evaluated;

During programme implementation, monitoring and evaluation ensures continuous tracking of programme progress and adjustment of programme strategies to achieve better results;

At programme completion, in-depth evaluation of programme effectiveness, impact and sustainability ensures that lessons on good strategies and practices are available for designing the next programme cycle.

General steps that entail the Monitoring and Evaluation Process

The evaluation process normally includes the following steps:

1) Defining standards against which programmes are to be evaluated. Such standards are defined by the programme indicators;

2) Investigating the performance of the selected activities/processes/products to be evaluated based on these standards. This is done by an analysis of selected qualitative or quantitative indicators and the programme context;

3) Synthesizing the results of this analysis;
4) Formulating recommendations based on the analysis of findings;
5) Feeding recommendations and lessons learned back into programme and other decision-making processes.


DIFFERENT APPROACHES TO EVALUATION

There are many different ways of doing an evaluation. Some of the more common approaches are highlighted below. (Note that the best evaluators use a combination of all these approaches, and that an organization can ask for a particular emphasis but should not exclude findings that make use of a different approach.)

Goal-based
- Major purpose: assessing achievement of goals and objectives.
- Typical focus questions: Were the goals achieved? Efficiently? Were they the right goals?
- Likely methodology: comparing baseline (see Glossary of Terms) and progress data (see Glossary of Terms); finding ways to measure indicators.

Decision-making
- Major purpose: providing information.
- Typical focus questions: Is the project effective? Should it continue? How might it be modified?
- Likely methodology: assessing a range of options related to the project context, inputs, process, and product; establishing some kind of decision-making consensus.

Goal-free
- Major purpose: assessing the full range of project effects, intended and unintended.
- Typical focus questions: What are all the outcomes? What value do they have?
- Likely methodology: independent determination of needs and standards to judge project worth; qualitative and quantitative techniques to uncover any possible results.

Expert judgement
- Major purpose: use of expertise.
- Typical focus questions: How does an outside professional rate this project?
- Likely methodology: critical review based on experience, informal surveying, and subjective insights.

The following are also common approaches to project/programme evaluation:

Self-evaluation: This involves an organization or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organization to do this effectively, but it can be an important learning experience.

Participatory evaluation: This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator.

Rapid Participatory Appraisal: Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative (see Glossary of Terms) way of doing evaluations. It is semi-structured and carried out by an interdisciplinary team over a short time. It is used as a starting point for understanding a local situation and is a quick, cheap, useful way to gather information. It involves the use of secondary data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars. In an evaluation context, it allows one to get valuable input from those who are supposed to be benefiting from the development work. It is flexible and interactive.

External evaluation: This is an evaluation done by a carefully chosen outsider or outsider team.

Interactive evaluation: This involves a very active interaction between an outside evaluator or evaluation team and the organization or project being evaluated. Sometimes an insider may be included in the evaluation team.


ADVANTAGES AND DISADVANTAGES OF INTERNAL AND EXTERNAL EVALUATION

Internal evaluation

Advantages:
- The evaluators are very familiar with the work, the organizational culture and the aims and objectives.
- Sometimes people are more willing to speak to insiders than to outsiders.
- An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms.
- An internal evaluation will cost less than an external evaluation.

Disadvantages:
- The evaluation team may have a vested interest in reaching positive conclusions about the work or organization. For this reason, other stakeholders, such as donors, may prefer an external evaluation.
- The team may not be specifically skilled or trained in evaluation.
- The evaluation will take up a considerable amount of organizational time; while it may cost less than an external evaluation, the opportunity costs (see Glossary of Terms) may be high.

External evaluation (done by a team or person with no vested interest in the project)

Advantages:
- The evaluation is likely to be more objective, as the evaluators will have some distance from the work.
- The evaluators should have a range of evaluation skills and experience.
- Sometimes people are more willing to speak to outsiders than to insiders.
- Using an outside evaluator gives greater credibility to findings, particularly positive findings.

Disadvantages:
- Someone from outside the organization or project may not understand the culture or even what the work is trying to achieve.
- Those directly involved may feel threatened by outsiders and be less likely to talk openly and co-operate in the process.
- External evaluation can be very costly.
- An external evaluator may misunderstand what you want from the evaluation and not give you what you need.

What are the major levels that should be monitored?

Inputs: e.g. resources, staff, funds, facilities, supplies, trainers, etc.
Process: e.g. level of implementation of the activity, achievements and constraints.
Outputs: e.g. condom availability, trained staff, quality of services (e.g. STI, VCT care), knowledge of HIV transmission.
Outcomes: e.g. behaviour change, attitude change, change in HIV/AIDS/STI prevalence, increase in social support, etc. These are sometimes difficult to measure by routine methods.

Types of Indicators:

Input indicators: track the means allocated for implementation of the activities, whether financial, personnel (technical assistance, volunteers), facilities, equipment or supplies.
Process indicators: track the activities in which the inputs are utilized, for instance in training, in establishment of a logistics system, or in planning of service delivery.
Output indicators: track the direct and immediate results of inputs and processes at project level, such as availability of VCT services.
Outcome indicators: refer to intermediate results at the target population level that are closely linked to the project, e.g. health impact.

The results chain, with examples at each level:

Inputs: resources, staff, funds, facilities, supplies, training.
Process: training of peer educators.
Outputs: condom availability; trained staff; quality of services (e.g. STI, VCT, care); knowledge of HIV transmission.
Outcomes: behaviour change; attitude change; change in STI trends.
Impact: HIV/AIDS trends; AIDS-related mortality; social norms; coping capacity in the community; economic impact.
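To make the results chain concrete in an M&E system, the levels above can be organized as a simple data structure keyed by level, with each indicator tracked against a baseline and a current value. The sketch below is illustrative only: the level names follow the chain above, while the indicator names and numbers are hypothetical.

```python
# A minimal, illustrative way to organize indicators by results-chain level.
# Level names follow the chain above; indicator names and values are hypothetical.

results_chain = {
    "input":   [{"name": "trainers recruited",     "baseline": 0,    "current": 12}],
    "process": [{"name": "peer educators trained", "baseline": 0,    "current": 340}],
    "output":  [{"name": "sites offering VCT",     "baseline": 5,    "current": 19}],
    "outcome": [{"name": "% reporting condom use", "baseline": 41.0, "current": 55.5}],
}

for level, indicators in results_chain.items():
    for ind in indicators:
        change = ind["current"] - ind["baseline"]
        print(f"{level:>8}: {ind['name']}: {ind['baseline']} -> {ind['current']} (change {change:+})")
```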


Evaluations must meet the following standards:

a) Evaluations must be useful and used

The evaluation must serve the information needs of the intended users. This requires that the needs of all stakeholders be identified and addressed. Reports should clearly describe the operation being evaluated, including its context, and the purposes, procedures and findings of the evaluation. Findings and reports must be disseminated in a timely manner, and implementation of evaluation recommendations must be ensured by the country office.

b) Evaluations must be accurate

The evaluation must reveal and convey technically adequate information about the operation, in order to determine its worth or merit. The evaluation report must be evidence-based, showing clearly how the evaluation team applied the methods and how the findings were arrived at. Findings must always be triangulated, i.e. supported by several different sources (e.g. key informant, beneficiary and direct observation).

c) Evaluations must seek to reflect the reactions of beneficiaries

Evaluation planning must provide for adequate consultations with representative beneficiary groups, with attention given to including the perspectives of males, females, children, and other vulnerable groups as relevant to the operation. Evaluation teams must make use of rapid rural appraisal (RRA) methods whenever possible and should use beneficiary observation and consultation as a key element of their field visits.

d) Evaluations must not be confrontational

Evaluations are most effective if done in a constructive manner. Stakeholders should be involved early on and should be allowed to express their information needs. Evaluations must be perceived as helpful and as providing added value, with their key objective being to improve performance.

e) Evaluations must be independent and impartial (unless undertaken as self-evaluations)

Evaluators should not have been involved in any stage of the operation being evaluated. The evaluation should be complete and fair in its examination and recording of the strengths and weaknesses of the operation. Differing viewpoints, if they exist, should be presented in the report.

THE FACTORS TO CONSIDER WHEN SELECTING AN EVALUATOR/EVALUATION TEAM

Qualities to look for in an evaluator or evaluation team:
a) An understanding of development issues.
b) An understanding of organizational issues.
c) Experience in evaluating development projects, programmes or organizations.
d) A good track record with previous clients.
e) Research skills.
f) A commitment to quality.
g) A commitment to deadlines.
h) Objectivity, honesty and fairness.
i) Logic and the ability to operate systematically.
j) Ability to communicate verbally and in writing.
k) A style and approach that fits with your organization.
l) Values that are compatible with those of the organization.
m) Reasonable rates (fees), measured against the going rates.


When your project team decides to use an external evaluator, it must:
a) Check his/her/their references.
b) Meet with the evaluators before making a final decision.
c) Communicate what you want clearly – good Terms of Reference (see Glossary of Terms) are the foundation of a good contractual relationship.
d) Negotiate a contract which makes provision for what will happen if time frames and output expectations are not met.
e) Ask for a work plan with outputs and timelines.
f) Maintain contact – ask for interim reports, either verbal or written, as part of the contract.
g) Build in formal feedback times.

The role of Project Monitoring and Evaluation in Evidence-based policy making

Evidence-based policy has been defined as an approach which “helps people make well informed decisions about policies, programmes and projects by putting the best available evidence at the heart of policy development and implementation” (Davies, 1999a). According to the MDG-guide, “Evidence-based policy making refers to a policy process that helps planners make better-informed decisions by putting the best available evidence at the centre of the policy process”. Evidence may include information produced by integrated monitoring and evaluation systems, academic research, historical experience and “good practice” information. The Evidence-based policy approach stands in contrast to opinion-based policy, which relies heavily on either the selective use of evidence (e.g. on single studies irrespective of quality) or on the untested views of individuals or groups, often inspired by ideological standpoints, prejudices, or speculative conjecture. Many governments and organizations are moving away from “opinion-based policy” towards “evidence-based policy”, and are in the stage of “evidence-influenced policy”.

The concept of ‘evidence-based policy’ has been gaining currency over the last two decades. The literature suggests that this new interest in bringing impartial evidence to the policy-making process comes in response to a perception that government needs to improve the quality of decision-making. Poor-quality decision-making has been linked to the loss of public confidence suffered in recent years. Traditionally, politicians and policy-makers operated on the belief that their voters were unquestioning. However, citizens are less and less inclined to take policy views on trust. Policy-makers are increasingly asked to explain not just what policy options they propose, and why they consider them appropriate, but also their understanding of their likely effectiveness.

Monitoring and evaluation can provide unique information about the performance of government policies, programmes and projects. It can identify what works, what does not work, and the reasons why. Monitoring and evaluation also provides information about the performance of a government, of individual ministries and of agencies, managers and their staff. Information on the performance of donors supporting the work of governments is also provided. It is tempting, but dangerous, to view monitoring and evaluation as having inherent value. The value of monitoring and evaluation comes not from conducting monitoring and evaluation or from having such information available; rather, the value comes from using it to help improve government performance.

The use of strong evidence can make a difference to policy making in at least five ways:

• Achieve recognition of a policy issue. The first stage in the process of policy formation occurs when the appearance of evidence reveals some aspect of social or economic life which had, until then, remained hidden from the general public and from policy-makers. Once this information is revealed, a variety of groups, such as civil servants, non-government organizations, development agencies or the media, lobby for a new policy issue to be recognized and addressed.


• Inform the design and choice of policy. Once a policy issue has been identified, the next step is to analyze it, so that the extent and nature of the problem can be understood. This understanding provides the basis for any subsequent policy recommendations.

• Forecast the future. Attempting to read the future is also required in order to know whether a policy measure taken to alleviate a problem in the short run will be successful in the long run as well. When a government is committed to attaining targets in the future, forecasting models allow an assessment of whether these targets are likely to be met (a minimal illustrative sketch follows this list).

• Monitor policy implementation. Once policies are being executed, information is required by policy-makers to monitor the expected results associated with the policies. Careful monitoring can reveal when key indicators are going off-track, which prompts further analysis leading to a change of policy.

• Evaluate policy impact. Measuring the impact of a policy intervention is more demanding of methodology and of information than is monitoring policy implementation. Incorporating an explicit mechanism for evaluating policy impact into the design of a policy is a key step to ensure its availability.
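As a minimal illustration of the forecasting point above, the sketch below fits a straight-line trend to a few years of an indicator and projects it to a target year. All values, including the indicator and the target, are hypothetical; real forecasting models for policy targets are usually far more sophisticated.

```python
# Hypothetical example: project a linear trend for an indicator to a target
# year and compare the projection with a stated target. Values are invented.

years = [2005, 2006, 2007, 2008]
enrolment_rate = [72.0, 74.5, 76.0, 78.5]   # % of girls enrolled (hypothetical)
target_year, target_value = 2015, 100.0

# Least-squares slope and intercept for a straight-line fit.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(enrolment_rate) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, enrolment_rate))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

projection = slope * target_year + intercept
print(f"Projected value in {target_year}: {projection:.1f}% (target {target_value}%)")
print("On track" if projection >= target_value else "Off track at current trend")
```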

PLANNING FOR MONITORING AND EVALUATION

Monitoring and evaluation should be part of your planning process. It is very difficult to go back and set up monitoring and evaluation systems once things have begun to happen. You need to begin gathering information about performance in relation to targets from the word go. The first information gathering should, in fact, take place when you do your needs assessment. This will give you the information you need against which to assess improvements over time. There is no one set way of planning for monitoring and evaluation. If you are familiar with logical framework analysis and already use it in your planning, this approach lends itself well to planning a monitoring and evaluation system.

When you do your planning process, you will set indicators. These indicators provide the framework for your monitoring and evaluation system. They tell you what you want to know and the kinds of information it will be useful to collect. In brief, the key aspects of planning for M&E include:

A. What do we want to know?
B. Different kinds of information
C. How will we get information?
D. Who should be involved?

A) WHAT DO WE WANT TO KNOW?

This includes looking at indicators for both internal issues and external issues. What we want to know is linked to what we think is important. In development work, what we think is important is linked to our values. Most work in civil society organizations is underpinned by a value framework. It is this framework that determines the standards of acceptability in the work we do. The central values on which most development work is built are: serving the disadvantaged; empowering the disadvantaged; changing society rather than just helping individuals; sustainability; and the efficient use of resources.

So, the first thing we need to know is: Is what we are doing and how we are doing it meeting the requirements of these values? In order to answer this question, our monitoring and evaluation system must give us information about:

a) Who is benefiting from what we do? How much are they benefiting?


b) Are beneficiaries passive recipients or does the process enable them to have some control over their lives?

c) Are there lessons in what we are doing that have a broader impact than just what is happening on our project?

d) Can what we are doing be sustained in some way for the long-term, or will the impact of our work cease when we leave?

e) Are we getting optimum outputs for the least possible amount of inputs?

Stakeholders are often divided on whether they want to know about the process or the product. Should development work be evaluated in terms of the process (the way in which the work is done) or the product (what the work produces)? Often, this debate is more about excusing inadequate performance than about a real issue. Process and product are not separate in development work. What we achieve and how we achieve it are often the very same thing.

Both process and product should be part of your monitoring and evaluation system. But how do we make process, product and values measurable? The answer lies in setting up reliable indicators. Indicators are measurable or tangible signs that something has been done or that something has been achieved. In some studies, for example, an increased number of television aerials in a community has been used as an indicator that the standard of living in that community has improved. An indicator of community empowerment might be an increased frequency of community members speaking at community meetings. If one were interested in the gender impact of, for example, drilling a well in a village, then you could use “increased time for involvement in development projects available to women” as an indicator. Common indicators for something like overall health in a community are the infant/child/maternal mortality rate, the birth rate, nutritional status and birth weights. You could also look at less direct indicators such as the extent of immunisation, the extent of potable (drinkable) water available and so on. Indicators are an essential part of a monitoring and evaluation system because they are what you measure and/or monitor. Through the indicators you can ask and answer questions such as: Who? How many? How often? How much? And so on.

But you need to decide early on what your indicators are going to be so that you can begin collecting the information immediately. You cannot use the number of television aerials in a community as a sign of improved standard of living if you don’t know how many there were at the beginning of the process.
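As a minimal illustration of why baseline values matter, the sketch below compares a baseline count with a later measurement to report change on an indicator. The indicator name and numbers are hypothetical.

```python
# Hypothetical indicator tracking: you can only report change if you
# recorded a baseline value before the intervention began.

baseline = {"tv_aerials": 120}    # counted at needs assessment (hypothetical)
follow_up = {"tv_aerials": 175}   # counted two years later (hypothetical)

for indicator, before in baseline.items():
    after = follow_up[indicator]
    pct_change = 100.0 * (after - before) / before
    print(f"{indicator}: {before} -> {after} ({pct_change:+.1f}%)")
```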

B) DIFFERENT KINDS OF INFORMATION

Information can be quantitative, qualitative, or a combination of the two. It follows that quantitative methods yield quantitative information, while qualitative methods yield qualitative information.

Quantitative measurement tells you “how much or how many”. How many people attended a workshop, how many people passed their final examinations, how much a publication cost, how many people were infected with HIV, how far people have to walk to get water or firewood, and so on. Quantitative measurement can be expressed in absolute numbers (3 241 women in the sample are infected) or as a percentage (50% of households in the area have television aerials). It can also be expressed as a ratio (one doctor for every 30 000 people). One way or another, you get quantitative (number) information by counting or measuring.
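The three ways of expressing quantitative measurement mentioned above (absolute numbers, percentages and ratios) can be shown with a few lines of arithmetic. The figures below are hypothetical.

```python
# Hypothetical figures illustrating absolute numbers, percentages and ratios.

households_total = 4800
households_with_aerials = 2400
doctors, population = 5, 150_000

absolute = households_with_aerials                        # absolute number
percentage = 100.0 * households_with_aerials / households_total
ratio = population // doctors                             # people per doctor

print(f"Absolute: {absolute} households have aerials")
print(f"Percentage: {percentage:.0f}% of households have aerials")
print(f"Ratio: one doctor for every {ratio:,} people")
```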

Qualitative measurement tells you how people feel about a situation or about how things are done or how people behave. So, for example, although you might discover that 50% of the teachers in a school are unhappy about the assessment criteria used, this is still qualitative information, not quantitative information. You get qualitative information by asking, observing, interpreting.

Some people find quantitative information comforting – it seems solid and reliable and “objective”. They find qualitative information unconvincing and “subjective”. It is a mistake to say that


“quantitative information speaks for itself”. It requires just as much interpretation in order to make it meaningful as does qualitative information. It may be a “fact” that enrolment of girls at schools in some developing countries is dropping – counting can tell us that, but it tells us nothing about why this drop is taking place. In order to know that, you would need to go out and ask questions – to get qualitative information. Choice of indicators is also subjective, whether you use quantitative or qualitative methods to do the actual measuring. Researchers choose to measure school enrolment figures for girls because they believe that this tells them something about how women in a society are treated or viewed.

The monitoring and evaluation process requires a combination of quantitative and qualitative information in order to be comprehensive. For example, we need to know what the school enrolment figures for girls are, as well as why parents do or do not send their children to school. Perhaps enrolment figures are higher for girls than for boys because a particular community sees schooling as a luxury and prefers to train boys to do traditional and practical tasks such as taking care of animals. In this case, the higher enrolment of girls does not necessarily indicate higher regard for girls.

C) HOW WILL WE GET INFORMATION?

The methods for collecting information need to be built into the project team’s action planning. You should be aiming to have a steady stream of information flowing into the project or organization about the work and how it is done, without overloading anyone. The information you collect must mean something: don’t collect information to keep busy; only do it to find out what you want to know, and then make sure that you store the information in such a way that it is easy to access.

Usually you can use the reports, minutes, attendance registers, financial statements that are part of your work anyway as a source of monitoring and evaluation information.

However, sometimes you need to use special tools that are simple but useful to add to the basic information collected in the natural course of your work. Some of the more common ones are:

Case studies
Recorded observation
Diaries
Recording and analysis of important incidents (called “critical incident analysis”)
Structured questionnaires
One-on-one interviews
Focus groups
Sample surveys
Systematic review of relevant official statistics.

Ways of gathering additional information:

• A questionnaire survey. A questionnaire survey can be used to find out more about the views and experiences of users, the wider community, agencies, etc. Use tick-boxes or questions that can be answered with a yes or no if you want to survey a lot of people, or ask a lot of questions. Questions that allow people to say more than just yes or no will give you more detailed information, but they take longer to fill in, a lot more time to analyse, and fewer people will fill them in. Response rates to questionnaires are often low, so think about offering a prize. (A sketch of how yes/no responses might be tallied follows this list.)

• In-depth interviews. It is usually best to limit the number of in-depth interviews to those people whose involvement with the project gives them particular insights or valuable experience – but try to talk to a range of people who are likely to have different perspectives and views on your project.


• Feedback forms. You can find out whether people have found your training and other events useful by asking them to fill in a short form. Ask them, for example, what they found most and least useful; what they might do differently as a result; what could be improved.

• Focus groups and round tables. A “focus group” gathers together about half a dozen people who are broadly similar (for example, they are all single parents with young children) to discuss themes or questions you want to address in the evaluation. A “round table” discussion is a similar idea, which brings together people with different perspectives (for example, teenage parents, teachers, health visitors).

• Diaries. Ask key people to keep diaries of their involvement with the project.

• Press reports. Gather and review press reports on the area (for example, you could see whether positive reports about the area are increasing).

• Observation. Take photographs of your area over time, to see if you can observe any changes. Observe who contributes to meetings or comes to your centre, and see whether this changes over time. This will give you an idea of which types of people you are reaching (men, women, younger, older) and which of these types of people are playing a more confident role in the project.

• Case studies. In order to make the evaluation manageable, you might want to pick a few pieces of work (case studies) to explore in detail, rather than trying to explore everything. Pick pieces of work that illustrate your main objectives.

• Evaluation workshops and review meetings. Hold special workshops/review meetings of people who are involved in your project and use pictures, photographs or models, as well as the spoken word, to get feedback from participants.
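The tallying sketch promised above: a minimal way the yes/no tick-box answers from a questionnaire survey might be counted and expressed as percentages. The question and responses are invented for illustration.

```python
# Hypothetical tally of yes/no responses to one tick-box survey question.
from collections import Counter

responses = ["yes", "no", "yes", "yes", "no", "yes", "yes", "no"]  # invented data

counts = Counter(responses)
total = len(responses)
for answer in ("yes", "no"):
    share = 100.0 * counts[answer] / total
    print(f"{answer}: {counts[answer]} of {total} ({share:.0f}%)")
```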

D) WHO SHOULD BE INVOLVED?

Almost everyone in the project will be involved in some way in collecting information that can be used in monitoring and evaluation. This includes:

The administrator who takes minutes at a meeting or prepares and circulates the attendance register;

The fieldworkers who write reports on visits to the field;
The bookkeeper who records income and expenditure.

In order to maximize their efforts, the project team needs to:

Prepare recording formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know how many men and how many women attended a meeting, include a gender column on your attendance list.

Prepare reporting formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know about community participation in activities, or women’s participation specifically, structure the fieldworker’s reporting format so that he or she has to comment on this, backing up observations with facts.

Record information in such a way that it is possible to work out what you need to know. For example, if you need to know whether a project is sustainable financially, and which elements of it cost the most, then make sure that your bookkeeping records reflect the relevant information.

It is a useful principle to look at every activity and say: What do we need to know about this activity, both process (how it is being done) and product (what it is meant to achieve), and what is the easiest way to find it out and record it as we go along?


METHODOLOGIES AND METHODS IN MONITORING AND EVALUATION

“Methodology”, as opposed to “methods”, deals more with the kind of approach you use in your evaluation process. You could, for example, commission or do an evaluation process that looked almost entirely at written sources, primary or secondary: reports, data sheets, minutes and so on. Or you could ask for an evaluation process that involved getting input from all the key stakeholder groups.

The methodologies and methods that can be adopted include:

a) The Logical Framework Approach (LFA) is a management methodology that is used in the design, monitoring and evaluation of projects. It is also widely known as Goal Oriented Project Planning (GOPP) or Objectives Oriented Project Planning (OOPP) or the "Project Matrix". It is useful to distinguish between the two terms: the Logical Framework Approach (LFA) and Logical Framework (LF or Logframe) because they are sometimes confused. While the Logical Framework Approach is a project design methodology, the LogFrame is a document.

The logical framework (log frame) is at times defined as a planning tool, designed before the start-up of project activities. The main elements of the log frame illustrate the project’s hierarchy of objectives and targets, its indicators for assessing achievements of the objectives, sources of the indicator information, and key assumptions outside the scope of the project that may influence its success. A log frame is constructed in a systematic and logical manner based on an analysis of information collected on constraints and opportunities for interventions to a specific problem. The log frame is referred to continuously throughout the life of a project; it is the most important document telling in detail what the project intends to achieve, and how it intends to achieve the objectives.

The Logical Framework takes the form of a four-by-four project table. The four rows are used to describe four different types of events that take place as a project is implemented: the project Activities, Outputs, Purpose and Goal (from bottom to top on the left-hand side). The four columns provide different types of information about the events in each row. The first column is used to provide a Narrative description of the event. The second column lists one or more Objectively Verifiable Indicators (OVIs) of these events taking place. The third column describes the Means of Verification (MoV): where information will be available on the OVIs. The fourth column lists the Assumptions. Assumptions are external factors that it is believed could influence (positively or negatively) the events described in the narrative column. The list of assumptions should include those factors that potentially impact on the success of the project but cannot be directly controlled by the project or programme managers, and which, if proved wrong, might have major negative consequences for the project. A good project design should be able to substantiate its assumptions, especially those with a high potential to have a negative impact.
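To make the four-by-four structure concrete, a logframe can be represented as a simple table in code: one row per level (Goal, Purpose, Outputs, Activities) and one field per column (narrative, OVIs, MoV, assumptions). The sketch below is a generic illustration with placeholder text, not the document’s worked example.

```python
# A logframe as a simple data structure: four rows (Goal, Purpose,
# Outputs, Activities), four columns (narrative, OVIs, MoV, assumptions).
# All cell contents here are placeholders.

logframe = [
    {"level": "Goal",       "narrative": "<overall development objective>",
     "ovis": ["<goal-level indicator>"], "mov": "<e.g. national survey>",
     "assumptions": ["<external factor beyond project control>"]},
    {"level": "Purpose",    "narrative": "<direct project effect>",
     "ovis": ["<purpose-level indicator>"], "mov": "<e.g. project survey>",
     "assumptions": ["<purpose-to-goal assumption>"]},
    {"level": "Outputs",    "narrative": "<deliverables>",
     "ovis": ["<output indicator>"], "mov": "<e.g. project records>",
     "assumptions": ["<output-to-purpose assumption>"]},
    {"level": "Activities", "narrative": "<tasks to be carried out>",
     "ovis": ["<inputs/budget>"], "mov": "<e.g. work plans, accounts>",
     "assumptions": ["<activity-to-output assumption>"]},
]

# Print the table row by row, top (Goal) to bottom (Activities).
for row in logframe:
    print(f"{row['level']:<10} | {row['narrative']} | OVIs: {', '.join(row['ovis'])} "
          f"| MoV: {row['mov']} | Assumptions: {', '.join(row['assumptions'])}")
```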

When developed within an organization, the LFA can be a means of articulating a common interpretation of the objectives of a project and how they will be achieved. The indicators and means of verification force clarifications as one would for a scientific endeavor: "you haven't defined it until you say how you will measure it." Tracking progress against carefully defined output indicators provides a clear basis for monitoring progress; verifying purpose and goal level progress then simplifies evaluation.


The blank Logical Framework table has the following layout:

           | Narrative description | Objectively Verifiable Indicators (OVIs) | Means of Verification (MoV) | Assumptions
Goal       |                       |                                          |                             |
Purpose    |                       |                                          |                             |
Outputs    |                       |                                          |                             |
Activities |                       |                                          |                             |

PSI/Uganda Cross-Generational Sex Program Log Frame, 2007 - 2009

Goal: To reduce HIV incidence in Uganda.
- Indicator: 1. Reduction of new HIV infection rates among females aged 15-24 years.
- Means of verification: MOH Sentinel Surveillance.
- Assumption: Sentinel surveillance continues.

Purpose: To increase adoption of safer sexual behaviour – rejection of cross-generational sex and knowledge of HIV status.
- Indicators:
  1. Reduce the percentage of 15-24-year-old young women having sexual relations with men who are older by 10 years or more.
  2. Increase the percentage of 20-24-year-old young women who know their HIV status.
- Means of verification: Cross Gen Secondary School and University TRaC Surveys (2007 to 2009, by PSI Research); AIC testing reports.
- Assumptions: the Ministry of Education continues to support HIV prevention programs in secondary schools and tertiary institutions; socio-economic conditions remain relatively stable; AIC will be able to provide testing; parents will consent; the Government of Uganda continues to support HIV prevention activities.

Output 1: Increased opportunity of young women aged 15-24 years to reject cross-generational sex and to know their HIV status.
- Indicators (Social Norm):
  1.1 Increase the percentage of 15-19-year-old young women who believe that their friends do not practice cross-generational sex.
  1.2 Increase the percentage of 20-24-year-old young women who believe that their friends do not practice cross-generational sex.
- Indicator (Availability):
  1.3 Increase the percentage of 20-24-year-old women who know where to access mobile, youth-friendly VCT services.
- Means of verification: Cross Gen Secondary School and University TRaC Surveys (2007 to 2009, by PSI Research).
- Assumption: communities actively participate in and support intervention activities.

Output 2: Increased ability of young women aged 15-24 years to reject cross-generational sex and to know their HIV status.
- Indicators (Self-Efficacy):
  1.1 Increase the percentage of young women aged 15-19 years who believe that they can refuse to engage in a sexual relationship with a man 10 years their senior.
  1.2 Increase the percentage of young women aged 20-24 years who believe that they can refuse to engage in a sexual relationship with a man 10 years their senior.
- Indicators (Social Support):
  1.3 Increase the percentage of 15-19-year-old young women who discourage their friends from practising cross-generational sex.
  1.4 Increase the percentage of 20-24-year-old young women who discourage their friends from practising cross-generational sex.
- Means of verification: Cross Gen Secondary School and University TRaC Surveys (2007 to 2009, by PSI Research).
- Assumption: communities actively participate in and support intervention activities.

Output 3: Increased motivation among young women aged 15-24 years to reject cross-generational sex.
- Indicators (Threat/Susceptibility):
  1.1 Increase the percentage of young women aged 15-19 years who believe that they are at an increased risk of contracting HIV by engaging in cross-generational sex.
  1.2 Increase the percentage of young women aged 20-24 years who believe that they are at an increased risk of contracting HIV by engaging in cross-generational sex.
- Indicator (Attitude):
  1.3 Increase the number of young women aged 15-19 years who believe that the risk of HIV infection outweighs the short-term material benefits of practicing cross-generational sex.
- Indicators (Outcome Expectations):
  1.1 Increase the percentage of 20-24-year-old young women who know two or more benefits of testing for HIV.
  1.2 Increase the number of young women aged 20-24 years who believe that the risk of HIV infection outweighs the short-term material benefits of practicing cross-generational sex.

Activities:
- Conduct qualitative research and use it to continuously inform intervention design.
- Develop communications campaigns.
- Conduct annual tracking surveys.
- Develop a media campaign (radio, TV, print).
- Develop a Peer Education Program.
- Distribute IEC materials among university and secondary school youth clubs.
- Partner with Straight Talk to conduct in-school IPC on cross-generational sex.
- Partner with AIC to conduct VCT among university and secondary school children.
- Conduct role model presentations in schools and universities.
- Train peer educators and develop and implement the Peer Education Program.
- Conduct parent skills trainings.

What can we use it for?
■ Improving quality of project and program designs, by requiring the specification of clear objectives, the use of performance indicators, and assessment of risks.
■ Summarizing the design of complex activities.
■ Assisting the preparation of detailed operational plans.
■ Providing an objective basis for activity review, monitoring, and evaluation.


ADVANTAGES:
■ Ensures that decision-makers ask fundamental questions and analyze assumptions and risks.
■ Engages stakeholders in the planning and monitoring process.
■ When used dynamically, it is an effective management tool to guide implementation, monitoring and evaluation.

DISADVANTAGES:
■ If managed rigidly, it stifles creativity and innovation.
■ If not updated during implementation, it can become a static tool that does not reflect changing conditions.
■ Training and follow-up are often required.

b) Theory-Based Evaluation

Theory-based evaluation has similarities to the LogFrame approach but allows a much more in-depth understanding of the workings of a program or activity: the “program theory” or “program logic”. In particular, it need not assume simple linear cause-and-effect relationships. For example, the success of a government program to improve literacy levels by increasing the number of teachers might depend on a large number of factors. These include, among others, the availability of classrooms and textbooks, the likely reactions of parents, school principals and schoolchildren, the skills and morale of teachers, the districts in which the extra teachers are to be located, the reliability of government funding, and so on. By mapping out the determining or causal factors judged important for success, and how they might interact, it can then be decided which steps should be monitored as the program develops, to see how well they are in fact borne out. This allows the critical success factors to be identified. Where the data show these factors have not been achieved, a reasonable conclusion is that the program is less likely to be successful in achieving its objectives.
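One way to picture the theory-based idea is as a checklist of assumed causal factors whose status is tracked as the program unfolds; where monitoring shows a factor has not held, confidence in the program’s success is lowered. The factors and statuses below are hypothetical, loosely following the literacy example above.

```python
# Hypothetical program-theory check for the literacy example: each causal
# factor assumed necessary for success is monitored and marked True/False.

causal_factors = {
    "classrooms available":            True,
    "textbooks delivered":             True,
    "teachers deployed to districts":  True,
    "teacher morale adequate":         False,  # monitoring data say this failed
    "government funding reliable":     True,
}

failed = [name for name, held in causal_factors.items() if not held]
if failed:
    print("Program theory at risk; factors not borne out:", ", ".join(failed))
else:
    print("All monitored causal factors on track.")
```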

What can we use it for?
■ Mapping the design of complex activities.
■ Improving planning and management.

ADVANTAGES:
■ Provides early feedback about what is or is not working, and why.
■ Allows early correction of problems as soon as they emerge.
■ Assists identification of unintended side-effects of the program.
■ Helps in prioritizing which issues to investigate in greater depth, perhaps using more focused data collection or more sophisticated M&E techniques.
■ Provides a basis to assess the likely impacts of programs.

DISADVANTAGES:
■ Can easily become overly complex if the scale of activities is large or if an exhaustive list of factors and assumptions is assembled.
■ Stakeholders might disagree about which determining factors they judge important, which can be time-consuming to address.

c) Formal Surveys

Formal surveys can be used to collect standardized information from a carefully selected sample of people or households. Surveys often collect comparable information for a relatively large number of people in particular target groups.

What can we use them for?
■ Providing baseline data against which the performance of the strategy, program, or project can be compared.
■ Comparing different groups at a given point in time.
■ Comparing changes over time in the same group.


■ Comparing actual conditions with the targets established in a program or project design.
■ Describing conditions in a particular community or group.
■ Providing a key input to a formal evaluation of the impact of a program or project.
■ Assessing levels of poverty as a basis for the preparation of poverty reduction strategies.

ADVANTAGES:
■ Findings from the sample of people interviewed can be applied to the wider target group or the population as a whole.
■ Quantitative estimates can be made for the size and distribution of impacts.

DISADVANTAGES:
■ Results are usually not available for a long period of time.
■ The processing and analysis of data can be a major bottleneck for larger surveys, even where computers are available.
■ Expensive and time-consuming.
■ Many kinds of information are difficult to obtain through formal interviews.

d) Rapid Appraisal Methods

Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders, in order to respond to decision-makers’ needs for information.

What can we use them for?
■ Providing rapid information for management decision-making, especially at the project or program level.
■ Providing qualitative understanding of complex socioeconomic changes, highly interactive social situations, or people’s values, motivations, and reactions.
■ Providing context and interpretation for quantitative data collected by more formal methods.

ADVANTAGES:
■ Low cost.
■ Can be conducted quickly.
■ Provides flexibility to explore new ideas.

DISADVANTAGES:
■ Findings usually relate to specific communities or localities, so it is difficult to generalize from them.
■ Less valid, reliable, and credible than formal surveys.

Examples of rapid appraisal methods include:

i. Key informant interview: a series of open-ended questions posed to individuals selected for their knowledge and experience in a topic of interest. Interviews are qualitative, in-depth, and semi-structured. They rely on interview guides that list topics or questions.

ii. Focus group discussion: a facilitated discussion among 8-12 carefully selected participants with similar backgrounds. Participants might be beneficiaries or program staff, for example. The facilitator uses a discussion guide. Note-takers record comments and observations.

iii. Community group interview: a series of questions and facilitated discussion in a meeting open to all community members. The interviewer follows a carefully prepared questionnaire.

iv. Direct observation: use of a detailed observation form to record what is seen and heard at a program site. The information may be about ongoing activities, processes, discussions, social interactions, and observable results.

v. Mini-survey: a structured questionnaire with a limited number of close-ended questions that is administered to 50-75 people. Selection of respondents may be random or “purposive” (interviewing stakeholders at locations such as a clinic for a health care survey).
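The distinction between random and purposive selection in a mini-survey can be illustrated in a few lines; the respondent list below is invented.

```python
# Hypothetical illustration of random vs. purposive respondent selection
# for a mini-survey.
import random

respondents = [f"person_{i:03d}" for i in range(1, 301)]  # invented sampling frame
clinic_visitors = respondents[:90]                        # invented subgroup

random_sample = random.sample(respondents, 60)   # random: everyone has an equal chance
purposive_sample = clinic_visitors[:60]          # purposive: chosen at a clinic

print("Random sample size:", len(random_sample))
print("Purposive sample size:", len(purposive_sample))
```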


e) Public Expenditure Tracking Surveys

Public expenditure tracking surveys (PETS) track the flow of public funds and determine the extent to which resources actually reach the target groups. The surveys examine the manner, quantity, and timing of releases of resources to different levels of government, particularly to the units responsible for the delivery of social services such as health and education. PETS are often implemented as part of larger service delivery and facility surveys which focus on the quality of service, characteristics of the facilities, their management, incentive structures, etc.
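The core PETS calculation, how much of what was released actually arrives, can be sketched in a few lines. The flow levels and amounts below are hypothetical.

```python
# Hypothetical PETS-style leakage calculation: compare funds released at
# each level of government with funds received at the next level down.

flow = [
    ("central ministry", 1_000_000),   # released (hypothetical)
    ("district office",    820_000),   # received (hypothetical)
    ("school",             610_000),   # received (hypothetical)
]

for (upper, sent), (lower, got) in zip(flow, flow[1:]):
    leakage = 100.0 * (sent - got) / sent
    print(f"{upper} -> {lower}: {got:,} of {sent:,} arrived (leakage {leakage:.1f}%)")

overall = 100.0 * flow[-1][1] / flow[0][1]
print(f"Share of original release reaching schools: {overall:.0f}%")
```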

What can we use them for?
■ Diagnosing problems in service delivery quantitatively.
■ Providing evidence on delays, "leakage," and corruption.

ADVANTAGES:
■ Supports the pursuit of accountability when little financial information is available.
■ Improves management by pinpointing bureaucratic bottlenecks in the flow of funds for service delivery.

DISADVANTAGES:
■ Government agencies may be reluctant to open their accounting books.
■ Cost is substantial, and can remain high until national capacities to conduct the surveys have been established. For example, the first PETS in Uganda cost $60,000 for the education sector and $100,000 for the health sector.
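To make the idea of "leakage" concrete, here is a minimal sketch with entirely hypothetical tier names and amounts (not figures from any actual PETS): each tier's reported releases are compared with what the next tier reports receiving.

```python
# Minimal sketch (hypothetical tiers and amounts, not actual PETS data):
# compare what each tier reports releasing with what the next tier reports
# receiving, and express the gap as "leakage".

reported_funds = {           # USD, in flow order
    "ministry": 1_000_000,   # released by the ministry
    "district": 820_000,     # received/released by districts
    "school":   610_000,     # received by schools
}

tiers = list(reported_funds)
for upstream, downstream in zip(tiers, tiers[1:]):
    sent, got = reported_funds[upstream], reported_funds[downstream]
    print(f"{upstream} -> {downstream}: {(sent - got) / sent:.1%} unaccounted for")

overall = 1 - reported_funds["school"] / reported_funds["ministry"]
print(f"Overall leakage: {overall:.1%}")
```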

f) Cost-Benefit and Cost-Effectiveness Analysis
Cost-benefit and cost-effectiveness analyses are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts. Cost-benefit analysis measures both inputs and outputs in monetary terms. Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (such as improvements in student reading scores). A short numerical sketch of the distinction follows this subsection.

What can we use them for?
■ Informing decisions about the most efficient allocation of resources.
■ Identifying projects that offer the highest rate of return on investment.

ADVANTAGES:
■ A good-quality approach for estimating the efficiency of programs and projects.
■ Makes explicit the economic assumptions that might otherwise remain implicit or be overlooked at the design stage.
■ Useful for convincing policy-makers and funders that the benefits justify the activity.

DISADVANTAGES:
■ Fairly technical, requiring adequate financial and human resources.
■ Requisite data for cost-benefit calculations may not be available, and projected results may be highly dependent on the assumptions made.
■ Results must be interpreted with care, particularly in projects where benefits are difficult to quantify.
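The distinction can be made concrete with a small sketch using hypothetical figures: cost-benefit analysis monetizes the benefits, while cost-effectiveness analysis divides cost by a non-monetary outcome measure. (A real cost-benefit analysis would also discount multi-year costs and benefits; that step is omitted here.)

```python
# Minimal sketch (hypothetical figures) contrasting the two techniques.
# A real cost-benefit analysis would also discount multi-year flows.

costs = 120_000.0  # inputs in monetary terms

# Cost-benefit analysis: benefits are monetized too.
monetized_benefits = 150_000.0
print("Net benefit:", monetized_benefits - costs)
print("Benefit-cost ratio:", round(monetized_benefits / costs, 2))

# Cost-effectiveness analysis: the outcome stays non-monetary,
# e.g. total points gained on student reading scores.
reading_score_points_gained = 400.0
print("Cost per reading-score point:", costs / reading_score_points_gained)
```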

g) Participatory Methods
Participatory methods provide active involvement in decision-making for those with a stake in a project, program, or strategy, and generate a sense of ownership in the M&E results and recommendations.

What can we use them for?
■ Learning about local conditions and local people's perspectives and priorities, to design more responsive and sustainable interventions.
■ Identifying and troubleshooting problems during implementation.
■ Evaluating a project, program, or policy.
■ Providing knowledge and skills to empower poor people.

ADVANTAGES:
■ Examines relevant issues by involving key players in the design process.
■ Establishes partnerships and local ownership of projects.
■ Enhances local learning, management capacity, and skills.
■ Provides timely, reliable information for management decision-making.

DISADVANTAGES:
■ Sometimes regarded as less objective.
■ Time-consuming if key stakeholders are involved in a meaningful way.
■ Potential for domination and misuse by some stakeholders to further their own interests.

Commonly used participatory tools include:
i. Stakeholder analysis: the starting point of most participatory work and social assessments. It is used to develop an understanding of the power relationships, influence, and interests of the various people involved in an activity, and to determine who should participate, and when.

ii. Participatory rural appraisal: a planning approach focused on sharing learning between local people, both urban and rural, and outsiders. It enables development managers and local people to assess and plan appropriate interventions collaboratively, often using visual techniques so that non-literate people can participate.

iii. Beneficiary assessment: systematic consultation with project beneficiaries and other stakeholders to identify and design development initiatives, signal constraints to participation, and provide feedback to improve services and activities.

iv. Participatory monitoring and evaluation: stakeholders at different levels working together to identify problems, collect and analyze information, and generate recommendations.

Participatory rural appraisal

Participatory rural appraisal (PRA) is an approach used by non-governmental organizations (NGOs) and other agencies involved in managing projects, which aims to incorporate the knowledge and opinions of rural people in the planning and management of projects and programmes. Both 'Participatory Rural Appraisal' (PRA) and 'Participatory Learning and Action' (PLA) developed from 'Rapid Rural Appraisal' (RRA).

Good features of PRA
PRA has the following distinctive features:
1. Iterative: goals and objectives are modified as the team realizes what is or is not relevant. The newly generated information helps to set the agenda for the later stages of the analysis. This involves the learning-as-you-go principle.
2. Innovative: techniques are developed for particular situations depending on the skills and knowledge available.
3. Interactive: the team and disciplines combine in a way that fosters innovation and interdisciplinarity. A systems perspective helps make communication easy.
4. Informal: focuses on partly structured and informal interviews and discussions.
5. In the community: learning takes place largely in the field, immediately afterwards, or in intensive workshops. Communities' perspectives are used to help define differences in field conditions.


It should be noted that RRA (Rapid Rural Appraisal) and PRA (Participatory Rural Appraisal) are not the same. The two terms are often used in the literature, and one should know the difference.

In Rapid Rural Appraisal (RRA), information is elicited and extracted by outsiders: people go to rural areas, obtain information, and then take it away to process and analyze. The information is owned by outsiders and often not shared with rural people. In Participatory Rural Appraisal (PRA), by contrast, information is owned and shared by local people. Outsiders (professionals) go to rural areas, but they facilitate rural people in collecting, presenting and analyzing the information themselves. The information is owned by rural people, though usually shared with outsiders.

There are seven major techniques used in PRA:
1. Secondary data reviews: books, files, reports, news articles, maps, etc.
2. Observation: direct and participant observation, wandering, DIY (do-it-yourself) activities.
3. Semi-structured interviews: informal, guided interview sessions, where only some of the questions are pre-determined and new questions arise during the interview in response to answers from those interviewed. The interviewees may be (1) individual farmers or households, (2) key informants, (3) a group interview, (4) a community meeting, or (5) chains (sequences) of interviews. The interview is conducted by a multi-disciplinary team of 2-4 persons, and the discussion is led by different people on different occasions.
4. Analytical games: quick games to find out a group's priorities, preferences, rankings, scorings, or stratifications.
5. Stories and portraits: colorful descriptions of situations, local history, trend analysis, etc.
6. Diagrams: maps, aerial photos, transects, seasonal calendars, Venn diagrams, flow diagrams, historical profiles, ethno-histories, time lines, etc.
7. Workshops: locals and outsiders are brought together to discuss the information and ideas intensively.

METHODOLOGICAL ISSUES
Not all methods are relevant to a given project design when it comes to collecting information to inform decision-making in that project. As such, there is a need to choose the most appropriate ones. Methodological issues include questions such as: What is the population? What method should be used? Should we sample or not? What about validity, and data sources? A good project team will be able to defend each methodological choice it adopts, i.e. give credible reasons for why a certain method was chosen and not another. One key methodological issue that confuses managers is data validity.

In establishing data validity, the manager needs to take note of the following (a small sketch of such checks follows this list):
• Check extreme values in data files for each item, and unacceptable values for coded items.
• Cross-check the data recorded for extreme values in the questionnaire.
• Check for abnormally high values of the standard deviation.
• Even though a code is provided for missing values, there can be confusion between missing values and a legitimate value of zero.
• Look for logical connections between variables, such as travel mode and travel time, or bribes paid and corruption.
• Poor data quality can often be traced to specific investigators or locations.
• Randomly check for data-entry problems by comparing data from questionnaires with printouts of data files.
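As a rough illustration of these checks, the sketch below assumes survey responses loaded into a pandas DataFrame; the column names, the valid codes and the 999 missing-value code are all hypothetical.

```python
# Minimal sketch of the data validity checks above, assuming survey responses
# in a pandas DataFrame. Column names, valid codes and the 999 missing-value
# code are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "travel_time_min": [15, 20, 999, 30, 0, 25],  # 999 = missing-value code
    "travel_mode":     [1, 2, 2, 9, 1, 3],        # valid codes: 1-4
})

# Unacceptable values for coded items.
bad_codes = df[~df["travel_mode"].isin([1, 2, 3, 4])]
print("Rows with invalid travel_mode codes:\n", bad_codes)

# Distinguish the missing-value code from a legitimate zero.
df["travel_time_min"] = df["travel_time_min"].replace(999, np.nan)
print("Missing travel times:", int(df["travel_time_min"].isna().sum()))
print("Legitimate zeros:", int((df["travel_time_min"] == 0).sum()))

# Extreme values and abnormally high standard deviation show up here.
print(df["travel_time_min"].describe())
```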

Sampling is another important concept when using various tools for a monitoring or evaluation process. Sampling is not really a tool in itself, but used with other tools it is very useful. Sampling answers the question: whom do we survey, interview, include in a focus group, and so on? It is a way of narrowing down the number of possible respondents to make the exercise manageable and affordable.


Sometimes it is necessary to be comprehensive. This means getting to every possible household, or school, or teacher, or clinic. In an evaluation, you might well use all the information collected in every case during the monitoring process in an overall analysis. Usually, however, unless numbers are very small, for in-depth exploration you will use a sample. Sampling techniques include (see the sketch after this list):

Random sampling: in theory, random sampling means doing the sampling on a sort of lottery basis where, for example, all the names go into a container, are tumbled around, and then the required number are drawn out. This sort of random sampling is very difficult to use in the kind of work we are talking about. For practical purposes you are more likely, for example, to select every seventh household or every third person on the list. The idea is that there is no bias in the selection.

Stratified sampling: e.g. every seventh household in the upper income bracket, every third household in the lower income bracket.

Cluster sampling: sampling whole groups rather than individuals, e.g. randomly selecting a few villages and then surveying every household in the selected villages. (Selecting only those people who have been on the project for at least two years is, strictly, purposive rather than cluster sampling.)
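The sketch below illustrates these techniques on a hypothetical list of 100 households; the income-bracket split is invented for illustration.

```python
# Minimal sketch of the sampling techniques above on a hypothetical list of
# 100 households; the income-bracket split is invented for illustration.
import random

households = [f"household_{i}" for i in range(1, 101)]

# Lottery-style simple random sample of 10.
simple_random = random.sample(households, 10)

# Practical systematic variant: every seventh household.
systematic = households[::7]

# Stratified: different sampling rates per (invented) income bracket.
upper_income, lower_income = households[:30], households[30:]
stratified = upper_income[::7] + lower_income[::3]

print(len(simple_random), len(systematic), len(stratified))
```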

Mixed Methods
Mixed methods refers to the practice of combining both quantitative and qualitative data gathering. Quantitative methods allow us to count events or numbers of participants, determine cost per participant, perform statistical analyses (mean, median, mode, standard deviation), and complete other calculations. Quantitative methods allow us to generalize the findings beyond the actual respondents to the relevant population. Qualitative methods allow us to record explanations, perceptions, and descriptions of experiences, often in the participants' own words, and to create narratives that provide an in-depth view and a more complete understanding of the context of the evaluation. Typically, a small number of individuals participate in a qualitative evaluation; consequently, the results from this small number of participants cannot be generalized to the population.

Each method has its own strengths and weaknesses. Using quantitative or qualitative methods in isolation limits what can be learned from the evaluation, what can be reported, and what can be recommended with any confidence as a result of the evaluation. Used in combination, however, the individual strengths of quantitative and qualitative methods can be maximized and the weaknesses minimized. More importantly, a synergy can be generated when using mixed methods: results from more than one method of data collection can be "triangulated," providing greater validity and enhanced understanding. A survey of participants may provide a great deal of information about what services are most desired (and least desired); an interview of a small number of the participants may then provide in-depth information concerning why those services are most desired (or least desired) and, importantly, what characteristics make a particular type of service most desired.
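As a small illustration of the quantitative side, the sketch below computes the summary statistics mentioned above on hypothetical attendance data, and tallies invented qualitative theme codes alongside them.

```python
# Minimal sketch (hypothetical data) of the quantitative summaries mentioned
# above, alongside a simple tally of coded qualitative themes.
import statistics
from collections import Counter

attendance = [12, 15, 15, 18, 22, 25, 30]  # participants per session (invented)
print("mean:", statistics.mean(attendance))
print("median:", statistics.median(attendance))
print("mode:", statistics.mode(attendance))
print("std dev:", round(statistics.stdev(attendance), 2))

# Qualitative side: count the themes coded from open-ended answers.
themes = ["access", "cost", "access", "quality", "access"]  # invented codes
print(Counter(themes).most_common())
```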

The table below provides a list of tools one can use, with a description, usefulness and disadvantages for each.

Interviews
Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Can be a source of qualitative and quantitative information.
Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person, on the telephone or even by e-mail. Very flexible.
Disadvantages: Requires some skill in the interviewer. For more on interviewing skills, see later in this toolkit.

Key informant interviews
Description: Interviews carried out with specialists in a topic, or with someone who may be able to shed a particular light on the process.
Usefulness: As key informants often have little to do with the project or organization, they can be quite objective and offer useful insights. They can provide something of the "big picture" where people more involved may focus at the micro (small) level.
Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (that cannot be challenged) just because it has been said by a key informant.

Questionnaires
Description: Written questions used to get written responses which, when analyzed, will enable indicators to be measured.
Usefulness: This tool can save lots of time if it is self-completing, enabling you to reach many people. Done in this way it gives people a feeling of anonymity, and they may say things they would not say to an interviewer.
Disadvantages: With people who do not read and write, someone has to go through the questionnaire with them, which means no time is saved and the numbers one can reach are limited. With questionnaires it is not possible to explore further what people are saying. Questionnaires are also over-used, and people get tired of completing them. Questionnaires must be piloted to ensure that questions can be understood and cannot be misunderstood. If the questionnaire is complex and will need computerized analysis, you need expert help in designing it.

Focus groups
Description: In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Usefulness: This can be a useful way of getting opinions from quite a large sample of people.
Disadvantages: It is quite difficult to do random sampling for focus groups, which means findings may not be generalizable. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed; this requires special equipment and can be very time-consuming.

Community meetings
Description: A gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input, to help in measuring indicators.
Usefulness: Community meetings are useful for getting a broad response from many people on specific issues. They are also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful at critical points in community projects.
Disadvantages: Difficult to facilitate; requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again.

Fieldworker reports (see also the fieldworker reporting format under examples)
Description: Structured report forms that ensure that indicator-related questions are asked and answers recorded, and observations recorded, on every visit.
Usefulness: Flexible, an extension of normal work, so cheap and not time-consuming.
Disadvantages: Relies on fieldworkers being disciplined and insightful.

Ranking
Description: This involves getting people to say what they think is most useful, most important, least useful, etc.
Usefulness: Can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used.
Disadvantages: Ranking is quite a difficult concept to get across and requires very careful explanation, as well as testing to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted.

Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems, issues, past events or even future events.
Usefulness: Very useful together with other tools, particularly with people who cannot read or write.
Disadvantages: You have to have appropriate stimuli, and the facilitator needs to be skilled in using such stimuli.

Rating scales
Description: This technique makes use of a continuum along which people are expected to place their own feelings, observations, etc. People are usually asked to say whether they agree strongly, agree, don't know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.
Usefulness: Useful for measuring attitudes, opinions, and perceptions.
Disadvantages: You need to test the statements very carefully to make sure that there is no possibility of misunderstanding. A common problem is when two concepts are included in the statement and you cannot be sure whether an opinion is being given on one, the other, or both.

Critical event/incident analysis
Description: A way of focusing interviews with individuals or groups on particular events or incidents. The purpose is to get a very full picture of what actually happened.
Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective.

Participant observation
Description: Direct observation of events, processes, relationships and behaviors. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.
Usefulness: Can be a useful way of confirming, or otherwise, information provided in other ways.
Disadvantages: It is difficult to observe and participate at the same time. The process is very time-consuming.

Self-drawings
Description: Getting participants to draw pictures, usually of how they feel or think about something.
Usefulness: Can be very useful, particularly with younger children.
Disadvantages: Can be difficult to explain and interpret.

INTERVIEWING SKILLS

Some do’s and don’ts for interviewing:

□ DO test the interview schedule beforehand for clarity, and to make sure questions cannot be misunderstood.
□ DO state clearly what the purpose of the interview is.
□ DO assure the interviewee that what is said will be treated in confidence.
□ DO ask if the interviewee minds if you take notes or tape-record the interview.
□ DO record the exact words of the interviewee as far as possible.
□ DO keep talking as you write.
□ DO keep the interview to the point.
□ DO cover the full schedule of questions.
□ DO watch for answers that are vague, and probe for more information.
□ DO be flexible and note down everything interesting that is said, even if it isn't on the schedule.
□ DON'T offend the interviewee in any way.
□ DON'T say things that are judgmental.
□ DON'T interrupt in mid-sentence.
□ DON'T put words into the interviewee's mouth.
□ DON'T show what you are thinking through a changed tone of voice.

ANALYZING INFORMATION
Whether you are looking at monitoring or evaluation, at some point you are going to find yourself with a large amount of information, and you will have to decide how to make sense of it, or analyze it. If you are using an external evaluation team, it will be up to that team to do the analysis; but sometimes in evaluation, and certainly in monitoring, you, the organization or project, have to do the analysis.

Analysis is the process of turning detailed information into an understanding of patterns, trends and interpretations. The starting point for analysis in a project or organizational context is quite often very unscientific: it is your intuitive understanding of the key themes that come out of the information-gathering process. Once you have the key themes, it becomes possible to work through the information, structuring and organizing it. The next step is to write up your analysis of the findings as a basis for reaching conclusions and making recommendations.

So, your process looks something like this (a small coding sketch follows the list):


1. Determine key indicators for the evaluation/monitoring process.
2. Collect information around the indicators.
3. Develop a structure for your analysis, based on your intuitive understanding of emerging themes and concerns, and on where you suspect there have been variations from what you had hoped and/or expected.
4. Go through your data, organising it under the themes and concerns.
5. Identify patterns, trends and possible interpretations.
6. Write up your findings and conclusions. Work out possible ways forward (recommendations).
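Step 4 (organizing data under themes) can be as simple as grouping excerpts by a theme code; the sketch below uses entirely hypothetical interview excerpts.

```python
# Minimal sketch (hypothetical excerpts) of step 4: organizing collected
# information under the themes identified in step 3.
from collections import defaultdict

excerpts = [
    ("water supply", "The borehole is broken half the year."),
    ("training", "Sessions were useful but too short."),
    ("water supply", "Queues at the tap are shorter than before."),
]

by_theme = defaultdict(list)
for theme, text in excerpts:
    by_theme[theme].append(text)

for theme, texts in by_theme.items():
    print(f"{theme} ({len(texts)} excerpts)")
    for t in texts:
        print("  -", t)
```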


Taking action

Monitoring and evaluation have little value if the organization or project does not act on the information that comes out of the analysis of data collected. Once you have the findings, conclusions and recommendations from your monitoring and evaluation process, you need to:

• Report to your stakeholders;
• Learn from the overall process;
• Make effective decisions about how to move forward; and, if necessary,
• Deal with resistance to the necessary changes within the organization or project, or even among other stakeholders.

REPORTING

Whether you are monitoring or evaluating, at some point, or points, there will be a reporting process. This reporting process follows the stage of analyzing information. You will report to different stakeholders in different ways, sometimes in written form, sometimes verbally and, increasingly, making use of tools such as PowerPoint presentations, slides and videos.

Below are suggested reporting mechanisms that might be appropriate for different stakeholders and at different times in the project cycle.

Target group: Board
- Interim, based on monitoring analysis: written report.
- Evaluation: written report, with an Executive Summary, and verbal presentation from the evaluation team.

Target group: Management Team
- Interim, based on monitoring analysis: written report, discussed at a management team meeting.
- Evaluation: written report, presented verbally by the evaluation team.

Target group: Staff
- Interim, based on monitoring: written and verbal presentation at departmental and team levels.
- Evaluation: written report presented verbally by the evaluation team, followed by in-depth discussion of relevant recommendations at departmental and team levels.

Target group: Beneficiaries
- Interim (but only at significant points) and evaluation: verbal presentation, backed up by a summarized document, using appropriate tables, charts, visuals and audio-visuals. This is particularly important if the organization or project is contemplating a major change that will impact on beneficiaries.

Target group: Donors
- Interim, based on monitoring: summarized in a written report.
- Evaluation: full written report with executive summary, or a special version focused on donor concerns and interests.

Target group: Wider development community
- Evaluation: journal articles, seminars, conferences, websites.


DEALING WITH RESISTANCE

Not everyone will be pleased about any changes in plans you decide need to be made. People often resist change. Some of the reasons for this include:

• People are comfortable with things the way they are; they don't want to be pushed out of their comfort zones.
• People worry that any changes will lessen their levels of productivity; they feel judged by what they do and how much they do, and don't want to take the time out necessary to change plans or ways of doing things.
• People don't like to rush into change; how do we know that something different will be better? They spend so long thinking about it that it is too late for useful changes to be made.
• People don't have a "big picture". They know what they are doing and they can see it is working, so they can't see any reason to change anything at all.
• People don't have a long-term commitment to the project or the organisation; they see it as a stepping stone on their career path. They don't want change because it will delay the items they want to be able to tick off on their curricula vitae.
• People feel they can't cope; they have to keep doing what they are doing but also work at bringing about change. It's all too much.

How can you help people accept changes?

• Make the reasons why change is needed very clear: take people through the findings and conclusions of the monitoring and evaluation processes, and involve them in decision-making.
• Help people see the whole picture, beyond their little bit, to the overall impact on the problem analyzed.
• Focus on the key issues: we have to do something about this!
• Recognize anger, fear and resistance. Listen to people; give them the opportunity to express frustration and other emotions.
• Find common ground: things that they also want to see changed.
• Encourage a feeling that change is exciting, that it frees people from doing things that are not working so they can try new things that are likely to work, and that it releases productive energy.
• Emphasize the importance of everyone being committed to making it work.
• Create conditions for regular interaction (anything from a seminar to graffiti on a notice board) to discuss what is happening and how it is going.
• Pace change so that people can deal with it.

ETHICAL ISSUES IN EVALUATION

Something is said to be ethical if it conforms to accepted standards, that is, if it is consistent with agreed principles of correct moral conduct. Anything an evaluator does that shows dishonesty is said to be unethical, and is therefore not acceptable in the field of project management. Like all humans, project managers may make mistakes; that is acceptable, but it is unethical to try to conceal such mistakes.

As regards ethical conduct, evaluators are expected to meet the following standards:
• Evaluators must have personal and professional integrity.
• Evaluators must respect the right of institutions and individuals to provide information in confidence, and ensure that sensitive data cannot be traced to their source.
• Evaluators must take care that those involved in evaluations have a chance to examine the statements attributed to them.
• Evaluators must be sensitive to the beliefs, manners and customs of the social and cultural environments in which they work.
• Evaluators must be sensitive to, and address, issues of discrimination and gender inequality.


Evaluations sometimes uncover evidence of wrongdoing. Such cases must be reported discreetly to the appropriate investigative body. Also, evaluators are not expected to evaluate the personal performance of individuals, and must balance an evaluation of management functions with due consideration for this principle.

Below are some areas of ethical dilemmas. It is incumbent upon each manager to ensure that the unethical instances below are avoided:

a) Ethical dilemmas when reporting findings
• Evaluation findings or reports are laundered to omit negative findings.
• Evaluation findings exaggerate successes and positive findings.
• Evaluation findings are suppressed altogether.
• Evaluation findings are released belatedly, so they are no longer relevant.
• Evaluation findings are prematurely released or leaked to the public.

b) Ethical dilemmas when planning evaluation
• Evaluation questions become so diffuse, so complex, or so multiple that the evaluation is not doable.
• The evaluation is under-funded.
• Some stakeholder perspectives are excluded.

c) Ethical dilemmas when doing evaluation
• The evaluation is dumbed down by an advocacy of weak measures or low standards.
• Information is withheld, distorted, or hidden.
• Evaluation information is used as ammunition by one stakeholder against other stakeholders.
• Promises of confidentiality to stakeholders are breached.

d) Ethical dilemmas introduced by the evaluator
• Personal or financial interest in the evaluation.
• Lack of knowledge or skill in technique or method.
• Lack of cultural sensitivity.
• Lack of respect for local cultures and values.
• Ideological positions that predetermine the evaluation outcome.
• A propensity to deliver positive evaluations to increase job security.
• Making promises evaluators cannot deliver on.
• Making decisions without consultation with appropriate stakeholders.

EVALUATING VALUE FOR MONEY IN THE PUBLIC SECTOR

Value for money (VFM) has always been, and still is, a fundamental part of the public service. With rising public expenditure, emphasis is being put on achieving more from this expenditure base. To deliver on the objective of VFM, there is a need for effective planning, monitoring and evaluation systems. One key challenge that management faces is developing and embedding an evaluation culture and practice in the day-to-day activities of their programs.

VFM is usually defined in terms of economy, efficiency and effectiveness. These three concepts need to be at the core of our day-to-day work, so as to reliably influence:
• what we spend money on;
• how much we allocate to projects, programmes or initiatives; and
• how we go about implementing these projects/programmes.

Proper planning and the development of effective programmes can be guided by the following questions and practices:
• What are we trying to achieve?
• Why are we trying to achieve it?
• What will this initiative contribute?
• Accurate costing of projects.
• Looking at whole-of-life costs.

Proper monitoring and evaluation of progress and performance can be guided by the following questions:
• How much is being spent?
• What outputs are being delivered?
• Are we achieving our goals/objectives?
Evaluation can also inform the development of indicators for performance monitoring.

Value for Money = (Inputs / Expenditure) × (Outputs / Inputs) × (Outcomes / Outputs) = Outcomes / Expenditure

The relation of the inputs to their costs is commonly called the 'Economy' of a project, and answers the question "How cheaply did we shop?". The relation of the outputs to the inputs of a project is called the 'Efficiency' of a project, and answers the question "How productively did we use our resources?". And the relation of outcomes to outputs defines the 'Effectiveness' of a project, providing information about how effective the outputs were in bringing about our objectives. Combining all three measures results in a formula for the ratio of outcome to expenditure, which answers our initial question: "What value did we get for our money?" In order to fill in and use this formula, we need to:
• measure inputs, expenditures, outputs and outcomes, and put them in relation to each other; and
• make sure the change in outputs and outcomes derives from the project being evaluated, and from nowhere else.
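Because the three ratios telescope, outcome per unit of expenditure can be computed directly or as the product of economy, efficiency and effectiveness. The sketch below checks this with hypothetical figures.

```python
# Minimal sketch (hypothetical figures) of the Value for Money decomposition:
# economy x efficiency x effectiveness telescopes to outcome per unit spent.

expenditure = 50_000.0   # money spent
inputs = 48_000.0        # value of inputs procured
outputs = 1_200.0        # e.g. pupils trained
outcomes = 900.0         # e.g. pupils reaching proficiency

economy = inputs / expenditure        # "How cheaply did we shop?"
efficiency = outputs / inputs         # "How productively did we use resources?"
effectiveness = outcomes / outputs    # how well outputs became outcomes

vfm = economy * efficiency * effectiveness
assert abs(vfm - outcomes / expenditure) < 1e-12
print(f"Outcome per unit of expenditure: {vfm:.4f}")
```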

STRUCTURE OF THE EVALUATION REPORT

Certain standard elements should be addressed in every evaluation report. However, the precise structure of a particular evaluation report depends on the specific focus, needs and circumstances of the project and its evaluation. The evaluation report should ideally not exceed 20 to 30 pages.

Cover page with key project and evaluation data

Key project data: project title, project number, donor(s), project start and completion dates, budget, technical area, geographic coverage.
Key evaluation data: start and completion dates of the evaluation mission, name(s) of the evaluator(s), date of submission of the evaluation report.

1. Abstract / EXECUTIVE SUMMARY
Maximum of 3-5 pages (the shorter the better), intended to provide enough information for busy people, but also to whet people's appetite so that they want to read the full report. Focuses on key findings and recommendations, and should be understandable as a stand-alone document. When preparing the abstract, keep in mind that it will appear in the evaluation database.

PREFACE (not essential, but a good place to thank people and make a broad comment about the process, findings, etc.)

CONTENTS PAGE (with page numbers, to help people find their way around the report.)

2. Brief background on the project and its logic

27 | P a g e c o m p i l e d b y N a n g o l i S u d i f o r l e a r n i n g p u r p o s e s

Page 28: Project m& E Master Document

• Brief description of the project's objectives and rationale;
• Project logic and strategy at approval and during implementation, including agreed revisions;
• Statement of implementation and delivery of the project.

3. Purpose, scope and clients of evaluation
• Type of evaluation;
• Brief description of the purpose and scope of the evaluation;
• Clients of the evaluation;
• Analytical focus of the evaluation.

4. Methodology
• Brief description of the methodology used;
• Information sources, including remarks on gaps and limitations;
• Remarks on the limitations of the methodology and problems encountered in data gathering and analysis, if any.

5. Review of implementation
• Brief review of the main stages in the implementation of the project, highlighting main milestones and challenges.

6. Presentation of findings
• Based on the key evaluation questions in the analytical framework;
• Covering all key evaluation criteria but concentrating on key issues:
  • Relevance and strategic fit of the project;
  • Validity of the project design;
  • Project progress and effectiveness;
  • Efficiency of resource use;
  • Effectiveness of management arrangements;
  • Impact orientation and sustainability;
  • Special concerns (if applicable).
It can be useful to order findings by these categories, but it is not mandatory.

7. Conclusions
Here you draw conclusions from the findings: the interpretation, what they mean. It is quite useful to use a SWOT analysis.

8. Recommendations
• Worded in a constructive manner and aimed at improving the project, future projects, the programme, and general ILO strategies;
• Presented in a clear, concise, concrete and actionable manner, making concrete suggestions for improvements, i.e. "who should do what to improve what";
• Specify who is called upon to act. It can be useful to group recommendations by addressee.

9. Lessons learned
• Observations, insights, and practices extracted from the evaluation that are of general interest beyond the project sphere and contribute to wider organizational learning;
• Highlight good practices, i.e. experience about what has been tried with a good result. Good practices are a way of making lessons learned more concrete. It must be possible to generalize or replicate them in other projects or work contexts, otherwise they are not interesting. ("What has worked particularly well and why? How can it be generalized or replicated?")

10. Annexes / APPENDICES
Should include the TOR and a list of persons contacted; can include any other relevant information, e.g. tables with supplementary data, survey questionnaires, possibly a map of the area, and so on.
