Knowing When and How to Conduct an Impact Evaluation



  • Anita Singh, PhD, RD
    USDA, Food and Nutrition Service, Office of Research and Analysis

    Arizona Nutrition Network Partners Meeting, January 28, 2010

  • Session Objectives
    Learn about the different types of evaluations
    Understand the importance of formative and process evaluation
    Understand the difference between outcome measures and an outcome evaluation
    Learn how to conduct an impact evaluation

  • Why Evaluate?
    To obtain ongoing, systematic information about a project
    For the project: management, efficiency, accountability

  • Types of Evaluation
    Formative

    Process

    Outcome

    Impact

  • Formative Evaluation
    Typically occurs while an intervention is being developed
    Results are used in designing the intervention
    Results are informative, not definitive
    Examples: focus groups, literature reviews, etc.

  • Process Evaluation
    Tracks the actual implementation (e.g., delivery, resources)
    Used to determine whether the intervention was delivered as designed
    Helps identify barriers to implementation and strategies to overcome them

  • Outcome Evaluation
    Addresses whether anticipated changes occurred in conjunction with the intervention
    Example: pre- and post-intervention tests of nutrition knowledge
    Indicates the degree of change, but is not conclusive evidence
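A pre/post comparison like the one above can be sketched as a paired analysis, since each participant serves as their own baseline. The scores below are invented for illustration; only the Python standard library is used.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre/post nutrition-knowledge scores for the same 8 participants
pre  = [55, 60, 48, 70, 62, 58, 65, 52]
post = [63, 66, 55, 74, 70, 61, 72, 60]

# Paired differences: each participant is compared with their own baseline
diffs = [b - a for a, b in zip(pre, post)]

mean_change = mean(diffs)
# Paired t statistic: mean difference over its standard error
std_error = stdev(diffs) / sqrt(len(diffs))
t_stat = mean_change / std_error

print(f"mean change: {mean_change:.2f}")
print(f"paired t statistic: {t_stat:.2f}")
```

A large t statistic suggests the change is unlikely to be random noise, but, as the slide notes, a pre/post design alone cannot attribute that change to the intervention.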

  • Outcome Evaluation versus Outcome Measure
    An outcome measure is a tool for answering the question posed by each type of evaluation
    There are outcome measures for all four types of evaluation: formative, process, outcome, and impact

  • Impact Evaluation
    Allows one to conclude authoritatively that the observed outcomes are due to the intervention
    Can draw cause-and-effect conclusions by isolating the intervention from other factors that might contribute to the outcome

  • Planning for an Impact Evaluation
    Is the intervention evaluable?
    What are the objectives? What is the expected size of the impact?
    Why, how, and when is the intervention expected to achieve the objectives?
    Will the intervention be implemented as intended?

  • Planning for an Impact Evaluation
    Build on available research
    Engage stakeholders
    Describe the intervention (e.g., develop a logic model or a conceptual framework)

  • Simplest form of logic model:

    INPUTS -> OUTPUTS -> OUTCOMES

    Source: University of Wisconsin-Extension, Program Development and Evaluation

  • A bit more detail:

    INPUTS (what we invest): program investments
    OUTPUTS (what we do, who we reach): activities, participation
    OUTCOMES (what results): short-, medium-, and long-term results; SO WHAT? What is the VALUE?

    Source: University of Wisconsin-Extension, Program Development and Evaluation

  • Planning for an Impact Evaluation: Study Design Considerations
    Experimental: the strongest type of design for cause and effect; uses random assignment; has cost considerations
    Quasi-experimental: does not use random assignment; can have a control group; may include multiple groups and/or multiple waves of data collection
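Random assignment, the feature that distinguishes the experimental design above, can be sketched in a few lines. The participant names and the fixed seed are illustrative assumptions.

```python
import random

# Hypothetical roster of enrolled participants
participants = [f"participant_{i}" for i in range(1, 21)]

rng = random.Random(42)  # fixed seed so the assignment is reproducible/auditable
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled roster in half: treatment vs. control
half = len(shuffled) // 2
treatment, control = shuffled[:half], shuffled[half:]

print(len(treatment), len(control))
```

Because assignment depends only on chance, not on participant characteristics, differences between the groups at follow-up can be attributed to the intervention.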

  • Planning for an Impact Evaluation
    Prepare the study plan:
    Develop SMART objectives
    Select outcome measures that fit the intervention
    Sampling plan
    Data collection plan
    Protection of human subjects (IRB, privacy, and confidentiality issues)
    Data analysis plan

  • Measurement Selection Includes
    Knowing the information needs
    Understanding the campaign/intervention rationale
    Basing measures on a theoretical model of behavior change
    Selecting the approach (e.g., mail survey, phone, in-person interview, records), considering reliability, response rate, and cost
    Selecting measurement tools, considering the validity and reliability of the instruments
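One standard way to quantify the internal-consistency reliability mentioned above is Cronbach's alpha. A minimal sketch with invented survey responses:

```python
from statistics import pvariance

# Hypothetical responses: rows are respondents, columns are survey items
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
]

k = len(responses[0])              # number of items on the instrument
items = list(zip(*responses))      # transpose: one tuple of scores per item
item_vars = [pvariance(col) for col in items]
total_var = pvariance([sum(row) for row in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values near 1 indicate that the items move together and measure a single underlying construct; values below about 0.7 are usually taken as a sign the instrument needs revision.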

  • Study Design: Sample Size
    Statistical power is based on the amount of change that can be expected
    Once the desired magnitude of change has been established, select/calculate a sample size with the statistical power to determine whether the change is due to the intervention and not random chance (see Hersey et al.)

  • Sample Size Depends On:
    The difference that is expected to be detected
    The measurement tool
    The study design (cross-sectional versus longitudinal)
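Under the common assumptions of a two-group comparison of means, a two-sided 5% test, and 80% power, the per-group sample size can be approximated with the standard formula n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2. The knowledge-score numbers below are hypothetical.

```python
from math import ceil

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group for a two-group comparison of means.

    delta:   smallest difference worth detecting
    sigma:   assumed standard deviation of the outcome
    z_alpha: 1.96 for a two-sided test at the 5% level
    z_beta:  0.84 for 80% power
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 5-point change in a knowledge score with SD 10:
print(n_per_group(delta=5, sigma=10))
```

Note how the required n shrinks as the detectable difference grows: halving the precision requirement (delta from 5 to 10) cuts the sample roughly fourfold, which is why the expected size of the impact drives the whole sampling plan.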

  • Other Considerations
    Response rate: the higher the response rate, the greater the likelihood that the sample is representative of the study population
    Example: a survey with 30 percent completion versus 80 percent completion

  • Other Considerations
    A low response rate raises issues such as intention to treat. Intention-to-treat analyses are done to avoid the effects of crossover and drop-out, which may break the randomization to the treatment groups in a study. An intention-to-treat analysis provides information about the potential effects of a treatment policy rather than the potential effects of a specific treatment.
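A minimal sketch of the intention-to-treat idea, using invented records: every participant is analyzed in the group they were assigned to, even if they dropped out or never completed the program.

```python
from statistics import mean

records = [
    # (assigned_group, completed_program, outcome_score) -- all values invented
    ("treatment", True, 72),
    ("treatment", True, 68),
    ("treatment", False, 55),  # dropped out, but still counted as treatment
    ("treatment", False, 58),
    ("control", True, 57),
    ("control", True, 60),
    ("control", False, 54),
    ("control", True, 59),
]

def itt_mean(group):
    # Keep every assigned member, completers and dropouts alike
    return mean(score for g, _, score in records if g == group)

effect = itt_mean("treatment") - itt_mean("control")
print(f"ITT effect estimate: {effect:.2f}")
```

Excluding the dropouts would inflate the treatment mean and break the randomization; the intention-to-treat estimate is more conservative but preserves the cause-and-effect interpretation.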

  • Other Considerations
    Selection bias: the sample is not truly representative of the study population
    Repeated interviews/testing
    Sample attrition: if the attrition rate is high, compare the pre/baseline scores of non-dropouts with dropouts; you may need to adjust for differences
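The attrition check described above, comparing baseline scores of dropouts with those of completers, can be sketched as follows; all scores are invented.

```python
from statistics import mean

# Hypothetical baseline scores, flagged by whether the participant later dropped out
baseline = [
    (62, False), (58, False), (65, False), (60, False), (61, False),
    (50, True), (48, True), (52, True),
]

completers = [score for score, dropped in baseline if not dropped]
dropouts   = [score for score, dropped in baseline if dropped]

gap = mean(completers) - mean(dropouts)
print(f"baseline gap (completers - dropouts): {gap:.2f}")
# A large gap suggests attrition was not random, so results may need adjustment.
```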

  • Other Considerations
    Seasonal effects (e.g., fresh fruit and vegetable consumption)
    Maturation (e.g., children)
    Other factors (e.g., another campaign, health-related publicity)

  • As the Intervention Begins
    Collect impact data after start-up problems have been resolved, but do not wait until implementation is widespread; plan for follow-up (interest/resources)

  • After the Intervention
    Report the findings
    Use the findings

  • Current FNS Nutrition Education Evaluation Projects
    To demonstrate that nutrition education through SNAP can bring about meaningful behavioral change
    To show that nutrition education implementers can conduct meaningful intervention evaluations
    Two studies, Models of SNAP-Ed and Evaluation Wave I and Wave II: results are intended to identify an initial set of promising practices for both nutrition education and evaluation

  • Models of SNAP-Ed and Evaluation, Wave I
    Four demonstration projects, competitively selected
    Each project has a self-evaluation component; an FNS contractor will also conduct impact evaluations
    Baseline data collection to begin shortly; final report expected in Fall 2011

  • Models of SNAP-Ed and Evaluation, Wave I
    The four demonstration projects are:
    The University of Nevada at Reno's All 4 Kids intervention, which targets pre-kindergarten children attending Las Vegas Head Start centers

    The Chickasaw Nation Nutrition Services' Eagle Play intervention, which targets 1st through 3rd grade children in Pontotoc County, Oklahoma

  • Models of SNAP-Ed and Evaluation, Wave I
    The Pennsylvania State University's Eating Competencies web-based intervention, which promotes Satter's eating competencies as an outcome for SNAP-eligible women ages 18-45

    The New York State Department of Health's Eat Well, Play Hard in Childcare Settings intervention, which targets 3- to 4-year-old low-income children and their caregivers

  • Models of SNAP-Ed and Evaluation, Wave II
    Applications are being reviewed; three will be selected
    The SNAP-Ed Connection web site has a project overview: http://snap.nal.usda.gov/nal_display/index.php?info_center=15&tax_level=1

  • Summary
    Evaluation can provide valuable, ongoing, systematic information about a project
    Evaluation features are common across delivery types
    The choice of features and evaluation type(s) will be driven by your information needs
    Cost and resource considerations are important

  • Evaluation Resources
    Nutrition Education: Principles of Sound Impact Evaluation. FNS, September 2005. http://www.fns.usda.gov/oane/menu/Published/NutritionEducation/Files/EvaluationPrinciples.pdf

    Building Capacity in Evaluating Outcomes. UW-Extension, October 2008. http://www.uwex.edu/ces/pdande/evaluation/bceo/index.html

  • Resources, continued
    W.K. Kellogg Foundation Evaluation Handbook. January 1998. http://www.wkkf.org/default.aspx?tabid=75&CID=281&NID=61&LanguageID=0

    Developing a Logic Model: Teaching and Training Guide. E. Taylor-Powell and E. Henert, UW-Extension, February 2008. http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

  • Resources, continued
    Harris et al. An introduction to qualitative research for food and nutrition professionals. J Am Diet Assoc. 2009;109:80-90.

    National Collaborative on Childhood Obesity Research, Policy Evaluation Webinar Series: Enhancing the Usefulness of Evidence to Inform Practice. http://www.nccor.org/

  • Resources, continued
    CDC online course (evaluation): http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/phase5/index.htm

    Evaluating Social Marketing in Nutrition: A Resource Manual. Hersey et al. http://www.fns.usda.gov/oane/MENU/Published/nutritioneducation/Files/evalman-2.PDF

  • Information on FNS's Completed and Planned Research Studies
    Office of Research and Analysis: http://www.fns.usda.gov/fns/research.htm
    See the 2010 Study and Evaluation Plans: Identifying High-Performance Nutrition Promotion Strategies
