TRANSCRIPT
Presented by: Gireesh S
M.Sc 1st Year, Roll No: 20646
Credit seminar on
Programme Evaluation
Division of Agricultural Extension - IARI
Definition of evaluation
Measures the change between BEFORE and AFTER a programme or project is carried out.
Evaluation takes into account unexpected consequences (targets vs. results).
What is “evaluation”?
From the Latin word ‘valere’, meaning to be strong or valiant. Evaluation is the process of establishing the worth or value of something. In the dictionary sense, it is the determination of the value, strength, or worth of something; an appraisal; an estimate; the making of a judgment about something.
“The systematic investigation of the merit, worth, or significance of an ‘object’” (Michael Scriven)
Evaluation is a process of individual and collective learning (Choudhary and Tandon, n.d.). We can learn from our successes, but especially from our failures; Korten (1980) calls this “embracing error”, and evaluation provides that occasion.
E – Evidence
V – Value
A – Action
L – Long-term
U – Utilization
A – Aimed
T – Towards
E – Efficiency
What is a Program?
Typically, organizations work from their mission to identify several overall goals which must be reached to accomplish their mission.
In public agencies and NGOs, each of these goals often becomes a program.
Public and nonprofit programs are organized methods to provide certain related services to constituents, e.g., clients, customers, patients, etc.
Programs must be evaluated to decide if the programs are indeed useful to constituents.
What is Program Evaluation?
Purposeful, systematic, and careful collection and analysis of information used for the purpose of documenting the effectiveness and impact of programs, establishing accountability, and identifying areas needing change and improvement.
Evaluation is the systematic application of scientific methods to assess the design, implementation, improvement or outcomes of a program (Rossi & Freeman, 1993; Short, Hennessy, & Campbell, 1996).
“…the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy…” (Carol Weiss)
Program evaluation is carefully collecting information about a program, or some aspect of a program, in order to make necessary decisions about the program.
Don't worry about what type of evaluation you need or are doing -- worry about what you need to know to make the program decisions you need to make, and worry about how you can accurately collect and understand that information.
Research vs. Evaluation
Research:
Production of generalizable knowledge
Researcher-derived questions
More controlled setting
Clearer role
Published
Clearer allegiance
Evaluation:
Knowledge intended for use
Program- or funder-derived questions
Action setting
Role conflicts
Often not published
Multiple allegiances
Both rely on systematic methods.
“Research seeks to prove, evaluation seeks to improve…”
M.Q. Patton
If the goal of evaluation is… to improve a program, then no evaluation is good unless its findings are used to make a difference.
Some Myths about Program Evaluation
Many people believe evaluation is a useless activity that generates lots of boring data with useless conclusions.
More recently (especially as a result of Michael Patton's development of utilization-focused evaluation), evaluation has focused on utility, relevance and practicality at least as much as scientific validity.
Myths about Program Evaluation…
Many people believe that evaluation is about proving the success or failure of a program.
This myth assumes that success is implementing the perfect program and never having to hear from employees, customers or clients again -- the program will now run itself perfectly.
This doesn't happen in real life.
Success is remaining open to continuing feedback and adjusting the program accordingly.
Evaluation gives you this continuing feedback.
Some Myths about Program Evaluation...
Many believe that evaluation is a highly unique and complex process that occurs at a certain time in a certain way, and almost always includes the use of outside experts.
Evaluation is only for experts
Quantitative data are the best
There is one best evaluation approach
Why Evaluate Programs?
To gain insight about a program and its operations – to see where we are going and where we are coming from, and to find out what works and what doesn’t
To improve practice – to modify or adapt practice to enhance the success of activities
To assess effects – to see how well we are meeting objectives and goals, how the program benefits the community, and to provide evidence of effectiveness
Purpose of Program Evaluation
Demonstrate program effectiveness to funders
Improve the implementation and effectiveness of programs
Better manage limited resources
Document program accomplishments
Justify current program funding
Support the need for increased levels of funding
Satisfy ethical responsibility to clients to demonstrate positive and negative effects of program participation (Short, Hennessy, & Campbell, 1996)
Document program development and activities to help ensure successful replication
Effective evaluation is not an "event" that occurs at the end of a project, but is an ongoing process which helps decision makers better understand the project; how it is impacting participants, partner agencies and the community; and how it is being influenced/impacted by both internal and external factors.
W.K. Kellogg Foundation Evaluation Handbook, p. 3
Standards and Principles
The professional evaluation association has established both standards for high-quality evaluations and principles for conducting evaluations.
These are values that guide the practice of evaluation.
The Four Standards of Evaluation
Standards: Utility, Feasibility, Propriety, Accuracy
Steps:
Engage stakeholders
Describe the program
Focus the evaluation design
Gather credible evidence
Justify conclusions
Ensure use and share lessons learned
4 Standards for Evaluation
The Utility Standards are intended to ensure that an evaluation will serve the information needs of intended users.
The Feasibility Standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal.
The Accuracy Standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine the worth or merit of the program being evaluated.
The Propriety Standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.
Principles for Evaluation
Principle: Systematic Inquiry. Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
Principle: Competence. Evaluators provide competent performance to stakeholders.
Principle: Integrity/Honesty. Evaluators ensure the honesty and integrity of the entire evaluation process.
Principle: Respect for People. Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.
Principle: Responsibilities for General and Public Welfare. Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.
The Evaluation Process: Characteristics
Systematic, Empirical, Logical, Inductive, Explanatory, Specific, Replicable, and Valid
When to Conduct Evaluation?
From conception to completion:
Planning a NEW program
Assessing a DEVELOPING program
Assessing a STABLE, MATURE program
Assessing a program after it has ENDED
The stage of program development influences the reason for program evaluation.
Ongoing: during the programme (often midway)
Terminal: 6-12 months after programme completion
Ex-post: several years after the end of the programme
The planning cycle
Evaluation need not be expensive, complicated, or time-consuming.
Some evaluation is better than none.
An external evaluator is sometimes seen as more objective than an internal one.
The evaluator should be qualified.
The evaluation plan should be meaningful, related to goals and objectives, and be an honest examination of the program.
How to do programme evaluation?
The Big Question…?
Categories of Evaluation
Relevance
Progress
Efficiency
Effectiveness
Impact
Critical Issues to Be Considered in the Evaluation Design
Description of the programme
Mission
Short- and long-term goals and objectives
Target population
Description of the activities
Priorities of the programme
Changes in the programme
Current situation of the programme
Available resources
Flow chart of the programme
A Programme Evaluation Framework
Focus (Questions): Purpose; Stakeholders; Areas (COAT); Time
Methods (Blueprint): Data collection plan; Qualitative; Quantitative; Sampling
Tools (Tools): Checklists; Questionnaires; Interview schedules; Observation sheets; Pre-test
Analysis (Findings): Data collection; Analysis; Interpretation
Reporting (Decisions): Action plan; Reporting
Stages in the Evaluation Process
Goals and objectives
Activity standardization
Statistical model design
Data collection
Data analysis
Explanation of success level
Recommendations
Example of Focus
Purpose: To improve the operation of the existing programme
To assess the impact of the programme
To decide on continuing with the programme
Any other?
COAT Questions
Components: What are the components of the programme? Need to cover all…
Outcomes: What is the programme intended to achieve?
Activities: What activities are performed during programme delivery, and how are they organized?
Target Groups: For whom is the programme designed, and what are their needs?
Process Evaluation Data – Types and Collection
TYPES (which is more useful?): Quantitative; Qualitative
How to collect data for process evaluation:
Interviews
Focus groups
Observation
Questionnaires/Surveys/Checklists
Case studies
Analysis of existing documents
Major categories and models of program evaluation
“Evaluation models either describe what evaluators do or prescribe what they should do” (Alkin and Ellett, 1990, p.15)
Major categories and models of program evaluation
Formative evaluation
Goal: to improve a program
Conducted during the implementation of a program; results are usually implemented immediately (process evaluation)
Summative evaluation: a global, formal judgment about the program’s future
Goal: to reach a conclusion as to whether a program should be stopped, continued as it is, or modified
Completed at the end of a program (ex post), after full implementation and functioning (outcome evaluation)
Formative vs. Summative evaluation
Formative evaluation
An essential part of instructional design models
It is the systematic collection of information for the purpose of informing decisions to design and improve the product/instruction (Flagg, 1990)
Why Formative Evaluation?
The purpose of formative evaluation is to improve the effectiveness of the instruction at its formation stage with systematic collection of information and data (Dick & Carey, 1990; Flagg, 1990).
Strategies
Expert review
Content experts: the scope, sequence, and accuracy of the program’s content
Instructional experts: the effectiveness of the program
Graphic experts: appeal, look and feel of the program
Strategies II
User review: a sample of targeted learners whose backgrounds are similar to those of the final intended users
Observations: users’ opinions, actions, responses, and suggestions
Who is the evaluator?
Internal: a member of the design and development team
Summative evaluation
The collection of data to summarize the strengths and weaknesses of instructional materials, to make decisions about whether to maintain or adopt the materials.
Strategies I: Expert judgment; field trials; external evaluator
Outcome vs. Impact evaluation
Outcome evaluation: measures specific objectives to assess the change; focuses on short-term or intermediate results
Impact evaluation: long-term outcomes, or in a broader sense positive, negative, direct, indirect, expected, unexpected, social, economic, environmental, etc.
Note: There is no consistency in the evaluation literature on the definition of impact evaluation or on short-, medium- or long-term results (the sequence of results is more important than the timeframe)
Prescriptive Models
“Prescriptive models are more specific than descriptive models with respect to procedures for planning, conducting, analyzing, and reporting evaluations” (Reeves & Hedberg, 2003, p.36).
Examples:
Objective-Driven Evaluation Model (1930s)
Kirkpatrick: Four-Level Model of Evaluation (1959)
Suchman: Experimental Evaluation Model (1960s)
Stufflebeam: CIPP Evaluation Model (1970s)
Objective-Driven Evaluation Model (1930s): R.W. Tyler
A professor at Ohio State University
The director of the Eight-Year Study (1934)
Tyler’s objective-driven model is derived from his Eight-Year Study
Objective-Driven Evaluation Model (1930s)
The essence: the attainment of objectives is the only criterion to determine whether a program is good or bad.
His approach: in designing and evaluating a program, set goals, derive specific behavioral objectives from the goals, establish measures for the objectives, reconcile the instruction to the objectives, and finally evaluate the program against the attainment of these objectives.
Four-Level Model of Evaluation (1959)
Kirkpatrick’s four levels:
The first level (reactions): the assessment of learners’ reactions or attitudes toward the learning experience
The second level (learning): an assessment of how well the learners grasp the instruction. Kirkpatrick suggested that a control group and a pre-test/post-test design be used to assess statistically the learning of the learners as a result of the instruction
The third level (behavior): a follow-up assessment of the actual performance of the learners as a result of the instruction. It determines whether the skills or knowledge learned in the classroom setting are being used, and how well they are being used, in the job setting
The final level (results): assesses the changes in the organization as a result of the instruction
Experimental Evaluation Model (1960s): Suchman
The experimental model is a widely accepted and employed approach to evaluation and research.
Suchman was identified as one of the originators and the strongest advocate of the experimental approach to evaluation.
This approach uses such techniques as pretest/posttest, experimental group vs. control group, to evaluate the effectiveness of an educational program.
It is still popularly used today.
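As a rough illustration of the experimental-vs-control comparison this model relies on, here is a minimal sketch in Python using SciPy; the post-test scores are made up for illustration and are not from any study cited here.

```python
from scipy import stats

# Hypothetical post-test scores for the two groups.
control = [62, 58, 65, 60, 59, 63, 61, 57]
experimental = [70, 68, 74, 66, 71, 69, 73, 67]

# Independent two-sample t-test: tests whether the group means differ.
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would suggest the programme made a difference.
```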
CIPP Evaluation Model (1970s): D. L. Stufflebeam
Context is about the environment in which a program would be used. This context analysis is called a needs assessment.
Input analysis is about the resources that will be used to develop the program, such as people, funds, space and equipment.
Process evaluation examines the status during the development of the program (formative).
Product evaluation assesses the success of the program (summative).
Stufflebeam’s CIPP evaluation model was the most influential model in the 1970s (Reiser & Dempsey, 2002).
Descriptive Models
They are more general in that they describe the theories that undergird prescriptive models (Alkin & Ellett, 1990)
Examples:
Patton: Qualitative Evaluation Model (1980s)
Stake: Responsive Evaluation Model (1990s)
Hlynka, Belland, & Yeaman: Postmodern Evaluation Model (1990s)
Qualitative Evaluation Model (1980s)
Michael Quinn Patton, Professor, Union Institute and University, and former President of the American Evaluation Association
Qualitative Evaluation Model
Patton’s model emphasizes qualitative methods, such as observations, case studies, interviews, and document analysis.
Critics of the model claim that qualitative approaches are too subjective and the results will be biased.
However, the qualitative approach in this model is accepted and used by many ID models, such as the Dick & Carey model.
Responsive Evaluation Model (1990s)
Robert E. Stake
He has been active in the program evaluation profession
He took up a qualitative perspective, particularly case study methods, in order to represent the complexity of an evaluation study
Responsive Evaluation Model
It emphasizes the issues, language, contexts, and standards of stakeholders
Stakeholders: administrators, teachers, students, parents, developers, evaluators…
His methods are negotiated with the stakeholders in the evaluation during its development
Evaluators try to expose the subjectivity of their judgment, just as other stakeholders do
The continuous nature of observation and reporting
Responsive Evaluation Model
This model is criticized for its subjectivity.
His response: subjectivity is inherent in any evaluation or measurement. Evaluators endeavor to expose the origins of their subjectivity, while other types of evaluation may disguise their subjectivity by using so-called objective tests and experimental designs.
Fourth generation model: E.G. Guba and Y.S. Lincoln
Fourth generation model
Seven principles underlie their model (constructivist perspective):
1. Evaluation is a socio-political process
2. Evaluation is a collaborative process
3. Evaluation is a teaching/learning process
4. Evaluation is a continuous, recursive, and highly divergent process
5. Evaluation is an emergent process
6. Evaluation is a process with unpredictable outcomes
7. Evaluation is a process that creates reality
Fourth generation model
The outcome of evaluation is rich, thick description based on extended observation and careful reflection
They recommend negotiation strategies for reaching consensus about the purposes, methods, and outcomes of evaluation
Other Major Models for Programme Evaluation
Expert model
Goal-free model
Objective attainment model
Management decision model
Naturalistic model
Experimental model
Participatory evaluation model
Expert model
Based on expert judgment. Documentation is prepared in advance of the expert visit.
The experts interview, analyze documents, and make judgments and comparisons; they are well educated in the field of social science.
Drawback: experts are outsiders to a programme, and because of their ignorance of the context of the situation, it is easy for project officials to dupe them.
Goal-free model
Studies what is actually happening to farmers’ interests regardless of stated goals and intentions.
Open-ended interviewing and observations by persons who do not have a vested interest in the programme.
It is assumed that outside evaluators do not know what the programme has intended to accomplish.
The evaluator identifies environmental and farming conditions and compares these needs with what people are actually experiencing as a result of the extension interventions. The gap is thus viewed as a starting point for making changes in the programme.
Objective attainment model
The success of a programme can be determined by measuring the programme’s outcomes against its own goals and objectives.
This process begins with clarifying measurable objectives and then gathering data that will validate the extent to which the objectives have been met.
Drawback: the programme goal may be set quite low so that outcomes are met easily, making the programme appear successful.
Naturalistic model
The major goal of evaluation is to understand how the programme is operating in its natural environment.
An important requirement is that data should be collated and analyzed from multiple perspectives.
The outcome of this evaluation is dialogue regarding disagreements among objectives, policies, procedures, and activities.
When conflict-resolution skills are combined with evaluation, many positive collaborative changes can be made through this model of evaluation.
It can diagnose the causes of behavior on the part of some farmers or development actors.
Management decision model
Evaluation is to provide relevant information as a management tool for decision makers.
Evaluation should be aligned with the decisions made during the programme initiation and operation stages, to make results relevant for each stage.
The emphasis is on the participation of stakeholders, because evaluation should serve their decisions.
There is a tendency for the decisions of stakeholders to be viewed as more important than those of the various types of farmers who may not benefit directly from such decisions.
Experimental model
Evaluation is to determine whether changes in programme outcomes were due to the contribution of the programme and not to other influences.
This is done by comparing a group that received the treatment with a group that did not.
Limitations of this model: programme access is withheld from the learners who serve as the control group; it is difficult to control external influences; and it is extremely difficult and costly to operationalize.
This model should be used only when major changes are expected or when major failure is anticipated in a pilot effort.
Participatory evaluation model
4Rs: Respect + Relevance + Reciprocity + Responsibility
Involvement of key representatives from local communities in all steps of program evaluation (from the beginning to the end)
A continuous learning and empowerment process for all involved
Important to assure the development of culturally relevant and meaningful indicators and outcomes
Regular feedback from the target population and the community
Partnership approach: collaboration and consultation
The approach rests on the relationship between the evaluator and key stakeholders
Program Evaluation Model (Bennett’s Hierarchy)
Used by governments to evaluate social and educational programmes. In this model, assessment is structured in a hierarchical fashion composed of seven levels, as follows:
Level | Name | Description
Level 7 | End results | Impact on the overall long-term goal
Level 6 | Practice change | Measures changes in behaviour
Level 5 | KOSA change | Measures changes in knowledge, opinions, skills, and aspirations of programme participants
Level 4 | Reactions | Measures perceptions of programme participants about the programme
Level 3 | Outputs | Measures activities completed and products developed
Level 2 | Activities | Measures what the programme offers or does
Level 1 | Inputs | Measures resources used in the programme (staff, budget, and time)
MGNREGA
Initiated in 2006; aims at enhancing the livelihood security of households in rural areas of the country by providing 100 days of employment.
Launched in 200 selected districts in 2006; extended to 440 additional districts in 2015.
It has set a base price for labour in rural areas, improved the bargaining power of labourers, and has led to a widespread increase in the cost of unskilled and temporary labour, including agricultural labour.
At the national level, with the average nominal wage paid under the scheme increasing from Rs 69 in FY 2006-07 to Rs 193 in FY 2015-16, the bargaining power of agricultural labour has increased, as even private-sector wages have increased (nrega.nic.in).
Coverage under MGNREGA… (nrega.nic.in, 2016)
States & Union Territories: 35
Total No. of Districts: 661
Total No. of Blocks: 6,860
Total No. of GPs: 2,62,269
Total No. of Job Cards (in crore): 13.36
Work Category | Percentage
Rural Sanitation | 36.30%
Rural Connectivity | 16.00%
Drought Proofing | 13.60%
Water Conservation & Harvesting | 11.30%
Land Development | 7.70%
Micro Irrigation Works | 5.20%
Renovation of Traditional Water Bodies | 3.60%
Flood Control Protection | 3.00%
% of Expenditure on Agriculture & Agriculture-Allied Works:
FY 2012-13: 61.01% | FY 2013-14: 57.73% | FY 2014-15: 60.14%
(nrega.nic.in)
• Increase in wage rate
• Shortage of labour for agriculture
• Diversification/migration
• Inflation
• Rising cost of living
Level 1: Inputs
Deals with the inputs, such as the financial and human resources being used for the programme, that are required to make the practice change.
Level 2: Activities of MGNREGA
Level two of Bennett’s hierarchy deals with the activities taken up under MGNREGA (see the work-category and expenditure figures above, nrega.nic.in).
Level 3: Outputs
Measures activities completed and products developed.
Effectiveness of Agricultural Technology Management Agency, by Lenin V. (2009)
Effectiveness Index = (PIY + PIR + PII) / 3
where PIY = percentage increase in yield, PIR = percentage increase in return, PII = percentage increase in income
Proposed by Gupta (1992)
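A minimal sketch of how this index could be computed, assuming hypothetical before/after values for yield, return, and income; the function and variable names are illustrative, not from the study.

```python
def pct_increase(before: float, after: float) -> float:
    # Percentage increase over the baseline value.
    return (after - before) / before * 100

def effectiveness_index(piy: float, pir: float, pii: float) -> float:
    # Gupta (1992): simple mean of the three percentage increases.
    return (piy + pir + pii) / 3

# Hypothetical example: yield 20 -> 25 q/ha, return Rs 50,000 -> 65,000,
# income Rs 40,000 -> 50,000.
piy = pct_increase(20, 25)            # 25.0
pir = pct_increase(50_000, 65_000)    # 30.0
pii = pct_increase(40_000, 50_000)    # 25.0
print(round(effectiveness_index(piy, pir, pii), 2))  # 26.67
```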
Implementation of the National Rural Employment Guarantee Programme in West Bengal: A Critical Analysis, by Shubhadeep Roy (2010)
Impact of NREGA on the empowerment of the beneficiaries in West Bengal
Significant positive changes were found in the level of aspiration, self-confidence, and self-reliance of respondents after the commencement of NREGA.
A paired t-test was used.
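A minimal sketch of the kind of paired t-test such a before/after comparison implies, using SciPy; the aspiration scores below are made up for illustration and are not the study’s data.

```python
from scipy import stats

# Hypothetical aspiration scores for the same 10 respondents,
# measured before and after the commencement of NREGA.
before = [12, 15, 11, 14, 13, 10, 16, 12, 14, 13]
after = [15, 18, 13, 17, 15, 13, 19, 14, 16, 15]

# Paired (related-samples) t-test: H0 is that the mean difference is zero.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) indicates a significant before/after change.
```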
Impact of MGNREGA on the livelihood security of the beneficiaries in West Bengal
Significant changes were found in food security, income security, habitat security, health security, and environmental security after working under NREGA.
82% of respondents fell under the medium livelihood security category after NREGA.
80.5% of respondents were found to fall under the low livelihood security category before NREGA.
Case studies…
The rise in real casual labourer wages is due to MGNREGA, with estimates ranging from 4% to 8% (Berg et al., 2012).
Between 1999 and 2005, pre-MGNREGA, nominal wages in the rural economy grew at an average annual rate of 2.7 per cent. Post-MGNREGA, average wage increases almost quadrupled, to 9.7 per cent, between 2006 and 2009-10 (Kumar, 2012).
The rise in casual wage rates cannot be wholly attributed to MGNREGA (Dutta et al., 2012).
Case study…
Primarily AP, Rajasthan, and TN, which have a higher proportion of the labour force engaged under MGNREGA, have experienced higher growth in real farm wages.
Jain, S. (2013). “Rising Farm Wages in India: The Pull and Push Factors”, Commission for Agricultural Costs and Prices, Discussion Paper.
The wage rate under MGNREGA has been revised by the Central Government in most of the states; it goes well above Rs. 100 per day and ranges between Rs. 120 and Rs. 179 per day depending upon the state.
Compared to the farm wage rate, the wage rate notified under MGNREGA was relatively higher in major states during 2007‐08.
To conclude…
Knowing is not enough; we must apply. Willing is not enough; we must do. -- Goethe
REFERENCES
Dick, W. (2002). Evaluation in instructional design: The impact of Kirkpatrick’s four-level model. In R.A. Reiser & J.V. Dempsey (Eds.), Trends and issues in instructional design and technology. New Jersey: Merrill Prentice Hall.
Dick, W., & Carey, L. (1990). The systematic design of instruction. Florida: HarperCollins Publishers.
Reeves, T., & Hedberg, J. (2003). Interactive learning systems evaluation. Educational Technology Publications.
Reiser, R.A. (2002). A history of instructional design and technology. In R.A. Reiser & J.V. Dempsey (Eds.), Trends and issues in instructional design and technology. New Jersey: Merrill Prentice Hall.
Stake, R.E. (1990). Responsive evaluation. In H.J. Walberg & G.D. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 75-77). New York: Pergamon Press.
Guiding Principles for Evaluators: A report from the AEA Task Force on Guiding Principles for Evaluators. http://www.eval.org/EvaluationDocuments/aeaprin6.html