
  • 8/6/2019 Promising Practices - May 2011 - Evaluation Article 1


Promising Practices in 4-H Science

Katherine E. Heck & Martin H. Smith
University of California, Division of Agriculture and Natural Resources

    4-H Youth Development Program

    Why evaluate?

4-H Science programs provide opportunities for young people to learn science content, improve their science process skills, and develop positive attitudes toward science. This is accomplished within a positive youth development program structure that allows youth to form positive relationships and to grow as individuals. Evaluations can provide important information about 4-H Science programming, including whether, or to what degree, outcomes have been achieved; which areas need improvement; and evidence of a program's value that can be demonstrated to funders.

Evaluations may include formative or summative elements, or both. Formative evaluations are primarily concerned with process and how well a program worked, and the information gained is typically used to feed back into program improvements. Summative evaluations focus on the outcomes of the program; for example, they might attempt to determine what skills the young people learned.

    Evaluation of 4-H Science Programming

    May 2011

4-H science education programs help increase youth science literacy in nonformal educational settings to improve attitudes, content knowledge, and science process skills.

    2011 National 4-H Council

The 4-H name and emblem are protected under 18 USC 707.


    Formative evaluations: Evaluation of process

Many evaluations of out-of-school time programming have been published. Research has demonstrated that formative evaluations can improve the quality of programming (Brown & Kiernan, 2001). Often conducted in the early stages of program implementation, formative evaluations typically use observational data, surveys, and/or interviews to determine whether the program is being implemented in accordance with its stated goals, and whether it adheres to a positive youth development framework. Participant satisfaction is one element commonly addressed. Formative evaluations can identify challenges that, once addressed, help the program improve. Such challenges might involve structural issues (such as problems with attendance), staff development needs, issues of collaboration, or other concerns (Scott-Little, Hamann, & Jurs, 2002). Additionally, formative evaluation may be used in 4-H Science programs to see how closely they adhere to the recommended 4-H Science Checklist, which states that programs should use inquiry strategies and an experiential learning approach, and should be delivered by trained and caring adults who include youth as partners.

    Summative evaluations: Evaluation of outcomes

A summative evaluation seeks to determine whether, or to what degree, youth who participate in a particular program achieve targeted outcomes. Summative evaluations are typically conducted on established programs, rather than in the first few months of program implementation. They seek to identify the impacts programs have on their participants.

The 4-H Science Initiative promotes the acquisition of a specific set of science process skills within a framework of positive youth development and based on the National Science Education Standards. Youth who participate in 4-H Science programming are expected to improve their Science Abilities, a group of science process skills. These 30 abilities, which include skills such as hypothesizing, researching a problem, collecting data, interpreting information, and developing solutions, are delineated on the 4-H Science Checklist, available at https://sites.google.com/site/4hsetonline/liaison-documents/4-HSE-Checklist2009.pdf. Youth in a 4-H Science-Ready program are also expected to have the opportunity to develop mastery and independence, to be able to contribute and feel generosity, and to feel a sense of belonging within the group. The acquisition of science content, the development of science process skills, and youth development outcomes are all possibilities for summative evaluation.



    Evaluation methods

Evaluation of science programs can take a variety of forms, such as observation of the program, surveys of participants, interviews with program staff or volunteers, or focus groups with participants. Each method has advantages and disadvantages, and selection of the appropriate method will depend on the goals of the evaluation.

An initial interview with the program director is important in determining the goals of the program, which is the first step in identifying the most appropriate methods for evaluation. If the program has a logic model, or can develop one, that can help clarify the goals and outcomes to be evaluated. The evaluation plan needs to be specific and appropriate to the program that is being evaluated.

Individual interviews or focus group interviews with staff, volunteers, or participants may provide insights into process issues that could be useful in formative or summative evaluations.

Observation of a program can provide useful information for a formative evaluation in determining how well the program delivery is working. Observers need to be prepared in advance for what to look for and how to note or code the interactions or other behaviors. One potential resource for observational data collection focusing on quality of programming is the Out-of-School Time (OST) Evaluation Instrument developed by Policy Studies Associates, available from http://www.policystudies.com/studies/?id=30.

Surveys of program participants are one of the most common methods for evaluation. Surveys, particularly those involving quantitative data, have the advantage of typically being quicker and simpler, and often less costly, to complete than qualitative methods. Traditional survey data collection was done with paper-and-pencil surveys, but newer forms that may be used include online surveys or group surveys done with computer assistance.

Authentic assessment is another method of evaluation. In authentic assessment, learners are asked to demonstrate their knowledge and skills through real-world tasks (Palm, 2008). Authentic assessments can be utilized for formative or summative purposes and can be used independently or in conjunction with more traditional assessment strategies such as surveys, observations, or interviews. With respect to 4-H Science, authentic assessment strategies, such as learners' responses to open-ended questions; written data presented in a science notebook; developing, conducting, and explaining the results of an experiment; or designing and building a model bridge, can be used to assess youths' understanding of specific science content and the development of Science Abilities. The activities or results are then coded independently by the evaluators.
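When evaluators code authentic-assessment products independently, as described above, their agreement can be checked before the codes are trusted. A minimal sketch (a generic illustration, not part of the article; the ratings below are invented) computes Cohen's kappa, a standard chance-corrected agreement statistic for two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items the raters coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two evaluators' hypothetical codes for ten model-bridge projects
# (1 = meets the targeted Science Ability, 0 = does not)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.52
```

Kappa near 1 indicates strong agreement; values this moderate would usually prompt the evaluators to refine the coding rubric and recode.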



    Best practices in science program evaluation

Evaluation tools need to be piloted prior to implementation to ensure that they can be used and understood, and to identify and correct any problems. Ideally, the items on a survey or interview should be validated and the reliability of the instrument ascertained. The reliability of an instrument is the extent to which it achieves consistent results, which may include consistency among the participants or among the individuals who are coding or rating the responses. The validity is a determination of whether the instrument is measuring what it is intended to measure; that is, whether the questions accurately and fully provide a picture of the concepts or constructs intended.
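One common way to quantify the consistency described above is internal-consistency reliability, often summarized with Cronbach's alpha. The sketch below is an illustration with invented item scores (alpha is only one of several reliability estimates, and this computation is not prescribed by the article):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)          # number of survey items
    n = len(items[0])       # number of respondents

    def variance(xs):       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(col) for col in items)
    # Each respondent's total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 1-5 responses from six youth on three related scale items
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))  # prints 0.87
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a multi-item scale.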

There are several considerations to take into account when identifying a particular evaluation method and an appropriate tool to use. Some of these might include:

• The age or abilities of the potential respondents. For example, paper-and-pencil surveys or surveys with complex questions are not appropriate for very young children.

• The number of respondents. Large numbers of respondents are often more suitable for quantitative methods: research that involves data that are typically measured numerically, with objective questions, often using multiple-choice or other fixed response categories. Small numbers of participants may make qualitative methods more appropriate and feasible. Qualitative research involves a text-based method of developing themes, theories, and ideas from observations, short or essay answers, focus groups, or interviews.

• The amount of time available for completing the evaluation. Some methods are quicker to complete than others.

• Whether the evaluation should be longitudinal and follow participants over time, or whether a point-in-time, cross-sectional evaluation is appropriate.

• How large a burden the evaluation needs to place on participants in terms of the time they will need to spend responding to a survey or participating in an interview or focus group.

• What specific outcomes or processes the evaluation will be ascertaining.
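For the quantitative route among the considerations above, fixed response categories can be tallied directly. A minimal sketch with invented responses to a single hypothetical survey item:

```python
from collections import Counter

# Hypothetical fixed-response answers to one survey item
# ("How confident are you designing an experiment?")
responses = ["very", "somewhat", "very", "not at all", "somewhat", "very"]

counts = Counter(responses)
n = len(responses)
for category, count in counts.most_common():
    # Report each category's count and its share of all responses
    print(f"{category}: {count} ({100 * count / n:.0f}%)")
```

The same tally generalizes to any fixed-response item; open-ended answers would instead go through the qualitative theme-coding described above.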

When developing new instruments or questions, it is important to try to prevent bias in the survey that may result from sampling issues or from the wording of particular questions. Bias results when the survey or sampling design causes error in measurement. Participants in evaluation surveys should ideally be either the entire program or a representative sample of all program participants. Questions should be worded to be as objective as possible, and written in a way that respondents are able to answer accurately. (For example, surveys asking about past behavior may be answered inaccurately if the respondent cannot recall behavior from a long time ago.)
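When surveying every participant is impractical, a simple random sample gives each participant an equal chance of selection, which guards against the sampling bias just discussed. A minimal sketch (the roster size and sample size here are hypothetical):

```python
import random

# Hypothetical roster of 120 program participants, identified by ID
roster = [f"youth-{i:03d}" for i in range(120)]

random.seed(42)  # fixed seed so the draw is reproducible for documentation
# random.sample draws without replacement: no participant appears twice
sample = random.sample(roster, k=30)
print(len(sample), len(set(sample)))  # prints 30 30
```

Stratified sampling (drawing separately within age groups or project areas) is a common refinement when the program has subgroups that must each be represented.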

Some states, such as Tennessee and Louisiana, have begun routinely collecting and summarizing evaluation data from program participants. Tennessee has developed a program evaluation network that includes a limited set of evaluation items that are completed by all youth participating in 4-H programming in the state. The specific questions that youth respond to vary depending on the projects in which they are engaged. The questions in the survey bank were drawn from previously validated tools, such as, for science programming, the Science Process Skills Inventory (available at http://www.pearweb.org/atis/tools/18).




    Resources for science program evaluators

There are many pre-existing validated, reliable surveys focusing on science content and science process skills. One useful resource for these tools is the Assessment Tools in Informal Science (ATIS) website, maintained by Harvard University's Program in Education, Afterschool and Resiliency. This website is located at http://www.pearweb.org/atis. Previously validated instruments are available for a range of participant ages, from preschool through college, and focusing on a range of science outcomes, from skills to interest to content knowledge. In addition to ATIS, there is a bank of instruments on science and other youth development outcomes available at the CYFERNET website, at https://cyfernetsearch.org/ilm_common_measures. These instruments are used in the evaluation of National Institute of Food and Agriculture (NIFA)-funded Children, Youth, and Families At Risk (CYFAR) projects, and were selected from pre-existing, validated instruments on a variety of topics, including science and technology but also demographics, youth program quality, leadership, citizenship, nutrition, parenting, workforce preparation, and others.

In addition to pre-existing validated tools, several journals publish articles that provide guidance on methodological issues around program evaluation. Among these are Practical Assessment, Research & Evaluation (http://pareonline.net/); the American Journal of Evaluation (http://aje.sagepub.com/); Evaluation Review (http://erx.sagepub.com/); and Evaluation (http://www.uk.sagepub.com/journals/Journal200757).

    References

Brown, J.L., & Kiernan, N.E. (2001). Assessing the subsequent effect of a formative evaluation on a program. Evaluation and Program Planning, 24(2), 129-143.

Palm, T. (2008). Performance assessment and authentic assessment: A conceptual analysis of the literature. Practical Assessment, Research & Evaluation, 13(4). Available online: http://pareonline.net/getvn.asp?v=13&n=4

Scott-Little, C., Hamann, M.S., & Jurs, S.G. (2002). Evaluations of after-school programs: A meta-evaluation of methodologies and narrative synthesis of findings. American Journal of Evaluation, 23(4), 387-419.

National 4-H Science Professional Development: Janet Golden, [email protected], 301-961-2982
Graphic Design and Layout: Marc Larsen-Hallock, UC Davis, State 4-H Office