
SAFE AND DRUG-FREE SCHOOLS

HANDBOOK FOR COORDINATORS

UNDERSTANDING THE PROCESS OF PROGRAM EVALUATION

2003

Developed for University of North Florida,
Florida Institute of Education, Safe, Disciplined, and Drug-Free Schools Project

by Meena Harris, MS

Special appreciation to reviewers from within the field:

Florida Institute of Education reviewers: Ravinder Singh, Tonya Milton, Patricia Elton, Leonard Everett, and John Masterson;

Staff reviewers from the Florida Department of Education, Bureau of School Safety and School Support, Office of Safe Schools. Staff members: Felicia Elliot, Shelly Hatton, and Penny Detscher; and

Additional thanks to Meena Harris, MS, Research and Evaluation Consultant of Parks and Owenby Consulting, Inc., for writing and designing the handbook.

TABLE OF CONTENTS

ABOUT THIS HANDBOOK ... v

OVERVIEW: EVALUATING THE EFFECTIVENESS OF PREVENTION PROGRAMS ... 1
  What Is Program Evaluation? ... 1
    Asking Relevant Questions ... 1
    More than a Matter of Opinion ... 1
    Need for Reporting Accurate Results ... 2
  How Do We Know Our Prevention Programs Are Actually Changing the Behaviors of Our Youth? ... 2
    Effects Are Not Always Immediately Apparent ... 2
    Support of Scientifically Based Research, in General ... 3
    Demonstrate Evidence of Program Effects, in Specific ... 3
  Getting Started ... 4
  Things to Remember ... 5

PHASE I: PREPARING FOR THE EVALUATION ... 7
  Step 1: Select a Program to Evaluate ... 8
  Step 2: Identify Key Stakeholders ... 10
  Step 3: Establish an Evaluation Team ... 12
  Step 4: Consider a Budget ... 14
  Step 5: Keep a Record of All Evaluation Project Activities ... 15
  Step 6: Establish an Initial Timeline ... 16
  Things to Remember ... 17

PHASE II: CHOOSING THE RIGHT EVALUATION QUESTIONS ... 19
  Step 1: Define the Program ... 20
    Task: Describe District Background Information ... 21
    Task: Delineate a Program Overview ... 21
    Task: Outline Program Goals and Objectives ... 25
  Step 2: Set the Scope of the Evaluation Project ... 27
    Task: Set Specific Evaluation Goals for the District Evaluation Project ... 27
    Task: Envision the Major Aspects of the Evaluation Project ... 29
  Step 3: Develop General Evaluation Questions ... 30
    Task: Review Previous Evaluation Research on the Same or Similar Programs ... 30
    Task: Ask Questions About Program Objectives That Are Relevant to the Goals of the Evaluation ... 30
    Task: Select Which Questions to Evaluate ... 31
    Task: Note Questions Not Selected ... 33
  Step 4: Write Specific Evaluation Questions ... 34
    Task: Develop at Least One Specific Evaluation Question for Each General Question ... 34
  Things to Remember ... 38

PHASE III: DESIGNING A DATA COLLECTION PLAN ... 39
  Step 1: Determine What Data Is Needed to Answer the Evaluation Questions ... 41
    Task: Create Measures of Program Implementation (Program Processes) ... 41
    Task: Create Measures of Participant Outcomes ... 43
    Task: Build Baseline Standards for Comparison into the Design Plan ... 46
  Step 2: Determine Where to Find the Best Source of Data to Answer Evaluation Questions ... 48
  Step 3: Determine How to Collect the Data ... 49
    Task: Determine What Type of Procedure Is Best Suited to Collect Evidence for Questions of Implementation ... 49
    Task: Determine What Type of Procedure Is Best Suited to Collect Evidence for Questions About Program Outcome Objectives ... 52
  Step 4: Determine How Much Data to Collect ... 53
    Task: Select a Sample Size, If Necessary ... 54
    Task: Keep Sampling Selections Congruent Across Data Sources ... 55
  Step 5: Make a Data Analysis Plan Before Data Collection Begins ... 56
  Step 6: Determine When to Collect the Data ... 57
    Task: Decide How Often Data Should Be Collected for Each Evaluation Question ... 58
  Step 7: Attend to Collection Issues ... 59
    Task: Responsibilities to Respondents ... 59
    Task: Manage and Organize the Procedures of Data Collection ... 59
  Things to Remember ... 62

PHASE IV: ANALYZING AND INTERPRETING THE DATA ... 63
  Step 1: Clean the Raw Data ... 64
  Step 2: Analyze Program Implementation Data ... 65
    Task: Code and Categorize Raw Data ... 65
    Task: Conduct a Descriptive Analysis of Raw Data ... 66
    Task: Answer Evaluation Questions and Look for Emerging Patterns ... 67
  Step 3: Analyze Data Relating to Participant Outcomes ... 68
    Task: Code Data ... 68
    Task: Conduct Descriptive Analysis on Raw Data ... 69
    Task: Answer Evaluation Questions and Look for Emerging Patterns ... 69
    Task: Perform Statistical Tests of Significance When Appropriate ... 70
  Step 4: Integrate and Synthesize Findings of the Initial Analysis ... 71
    Task: Build Summary Tables and Graphs of Findings ... 71
    Task: Decide What Information Is Relevant ... 72
  Things to Remember ... 73

PHASE V: REPORTING THE RESULTS OF YOUR EVALUATION ... 74
  Step 1: Decide What Is Relevant to Report ... 75
  Step 2: Write the Final Report ... 76
  Things to Remember ... 80

APPENDICES ... 81
  Appendix 1: Principles of Effectiveness ... 83
    State Definition of Scientifically Based Research ... 84
  Appendix 2: How to Hire an Evaluator ... 85
    Determining the Evaluator's Role ... 85
    Where to Look for an Evaluator ... 86
    When to Hire an Evaluator ... 86
    Request for Proposal (If Necessary) ... 87
    The Selection Process ... 88
    Responsibilities to Be Included in a Contract ... 89
  Appendix 3: Useful Secondary Data Sources ... 90
    Levels of Data for Selected Surveys ... 90
    Florida Youth Survey Effort ... 91
    Content of Selected Youth Surveys ... 92
    Data Sources for State Goals: ATOD Prevention ... 93
    Data Sources for State Goals: Violence Prevention ... 94
  Appendix 4: List of Supplemental Resources ... 95
    Books ... 95
    Online Evaluation Guides ... 96
    Online Prevention Program Evaluation Topics and Websites ... 97
    General Evaluation Topics and Websites ... 97
  Appendix 5: Evaluation Activity Worksheets ... 99

REFERENCES ... 113

LIST OF TABLES AND WORKSHEETS

Worksheet 1.1  Notes for Program Selection ... 9
Worksheet 1.2  Identify Stakeholders ... 11
Worksheet 1.3  Timetable During Phase I ... 16
Table 2.1      Program Implementation Objectives ... 23
Table 2.2      Program Participant Outcome Objectives ... 24
Worksheet 2.1  Defining Key Aspects of Your Program ... 26
Worksheet 2.2  Setting the Scope of Your Evaluation Project ... 28
Table 2.3      Examples of General Questions ... 31
Worksheet 2.3  Selected Evaluation Questions ... 32
Worksheet 2.4  Evaluation Questions Not Selected ... 33
Table 2.4      Examples of Measurable Questions ... 36
Worksheet 2.5  Writing Your Specific Measurable Questions ... 37
Table 3.1      Measuring Program Implementation Objectives ... 42
Table 3.2      Measuring Participant Outcome Objectives ... 45
Table 3.3      Ways to Make Relevant Comparisons ... 46
Table 3.4      Most Common Collection Techniques for Questions of Program Implementation ... 50
Table 3.5      Most Common Collection Techniques for Questions of Program Outcomes ... 51
Worksheet 3.1  Timetable of Collection Activities ... 57
Worksheet 3.2  Evaluation Design Matrix ... 60-61
Worksheet 4.1  Summary of Findings ... 71
Appendix 2     How to Hire an Evaluator ... 85
Appendix 3     Levels of Data for Selected Youth Surveys ... 90
Appendix 3     Florida Youth Survey Effort ... 91
Appendix 3     Content of Selected Youth Surveys ... 92
Appendix 3     Data Sources for State Goals: ATOD Prevention ... 93
Appendix 3     Data Sources for State Goals: Violence Prevention ... 94
Appendix 5     A Complete Set of All Evaluation Worksheets ... 99

ABOUT THIS HANDBOOK

As mandated by the Principles of Effectiveness found in Title IV, 21st Century Schools, Part A – Safe and Drug-Free Schools and Communities, SEC. 4115, of the No Child Left Behind Act of 2001, local education agencies must periodically evaluate the accomplishments of their funded prevention programs in order to demonstrate a substantial likelihood of success.

Program evaluation is a critical step to ensuring that the programs we offer youth are effective in reducing drug use and violence. Further, program evaluation activities are a mechanism providing local education agencies with feedback on the progress of program delivery and outcome objectives. Information gathered from a quality evaluation allows districts to make relevant and meaningful improvements to their programs in particular and to their prevention policies more broadly.

This handbook has been provided for the Safe and Drug-Free Schools Project (SDFS) Coordinators in order to clarify the scope of the Florida SDFS evaluation effort. Regulatory procedures already require districts to implement programs supported by scientifically based research; what is unknown is whether these implemented programs produce a substantial likelihood of success within district-specific situations. SDFS Coordinators must focus their evaluation efforts on the way in which a program has been implemented within their school district, given the student population targeted, the resources available, and other circumstances or conditions specific to that district. Of primary importance is the need to know whether a particular program is producing positive results for that district.

This handbook is designed to:

• Offer SDFS Coordinators who are using outside evaluator services a means both to understand what the evaluator is doing and to stay involved in the evaluation process.

• Offer SDFS Coordinators who are conducting in-house evaluations a step-by-step instructional guide for planning and executing a program evaluation.


NOTE

In order to illustrate certain points, the fictitious district, Oceanside, and its implementation of the Life Skills Training Program (LST) will be used throughout this handbook.

Life Skills Training (LST) focuses on (1) developing personal and social skills to improve students' general competence and reduce the motivation to use alcohol, tobacco, and other drugs, and (2) improving students' ability to apply these skills to situations in which they may experience social pressure to use drugs.

The LifeSkills program consists of three major components that cover the critical domains found to promote drug use:

• Drug Resistance Skills
• Personal Self-Management Skills
• General Social Skills

How to use this handbook:

This handbook is not meant to be read from beginning to end all at one time like a novel. Use it primarily as a reference to aid and improve your understanding of the various activities of program evaluation.

This handbook begins with a short presentation of some elementary program evaluation concepts. The sections that follow are divided into evaluation phases, each outlining the basic tasks involved in the evaluation process from initial design plans to report writing. Additional resources are listed for more advanced instruction in specific areas if necessary.

OVERVIEW: EVALUATING THE EFFECTIVENESS OF PREVENTION PROGRAMS

What is Program Evaluation?

Asking relevant questions

Program evaluation has to do with collecting information about a program in order to answer questions asked about it. The types of questions asked when evaluating a prevention program in particular primarily focus on issues of assessment: 'Did we accomplish what we set out to achieve?'1 Finding the answers to evaluation questions entails scrutinizing both program results and the qualities that make the program work.

The specific type of information an evaluation seeks to investigate depends upon its intended audience. The type of information an audience is looking for will frame the scope of the evaluation questions developed. These questions, in turn, will determine the evaluation design and dictate the kinds of information collected that will help answer the evaluation questions.

More than just a matter of opinion

In order to establish evaluation findings as objective, the evaluation must be carried out in a systematic manner based on scientific methods of research. This means following procedures that obtain reliable and valid knowledge relevant to the program under evaluation.

1 Hawkins, J.D., and Nederhood, B. (1987). Handbook for Evaluating Drug and Alcohol Prevention Programs, p. 2.


This handbook is organized around the most basic phases of the evaluation process, regardless of the particular evaluation model employed. They consist of:

Phase I: Preparing for the evaluation
Phase II: Choosing the right evaluation questions
Phase III: Designing a data collection plan
Phase IV: Analyzing and interpreting the data collected
Phase V: Reporting the findings

Within each of these phases, steps are outlined to provide you, as the evaluator, with the means to conduct an evaluation that can reliably substantiate information beyond mere opinion.

Need for reporting accurate results

The United States Department of Education must report to Congress on the effectiveness of prevention programs and their ability to reduce violence, alcohol, and other drug use in grades K-12 as funded under the SDFSA. In order to meet this requirement, local education agencies are asked to evaluate the accomplishments of their funded prevention programs. Providing accurate information within the context of a quality evaluation does more than just satisfy federal accountability. More importantly, accurate information provides local education agencies with the means to make meaningful decisions about which prevention strategies to implement and which policies to adopt.

How Do We Know Our Prevention Programs Are Actually Changing the Behavior of Our Youths?

Effects are not always immediately apparent

The ultimate goal of any prevention program is to change the attitudes and behaviors that put youths at risk. A unique characteristic of prevention programs is that they seek to avert a problem before it starts. When prevention programs are evaluated, it becomes necessary to show that something did not happen which would likely have happened if the prevention program service had not been provided.

A second characteristic of prevention programs is that results cannot always be assessed immediately. At the time of program completion, participants may be able to demonstrate a gain in skills or knowledge, or a change in attitudes, perceptions, and/or intentions. Changes in behavior, however, may not manifest themselves for months or even years following the program.

Measuring success rates of prevention programs, therefore, becomes more complicated than measuring the success rates of something like a job-training program, for example. The question then becomes: how do we evaluate the effectiveness of prevention programs?

Support of scientifically based research, in general

There already exists a body of scientifically based research2 that demonstrates the positive effects of the various SDFS prevention programs and activities currently in use. This body of evidence supports claims that these programs influence participants, not only in reducing the use of violence, alcohol, and other drugs among youth, but also in providing youths with beneficial skills that can be incorporated into their lives. These studies, which demonstrate the effectiveness of those prevention programs, assume that they work within any school given the targeted population.

NOTE

SDFS Coordinators need to know whether the implementation of a specific program is effective for the given situation within their district. In other words, is the program applicable to district-specific conditions? Is it producing positive results efficiently within the target group?

Demonstrate evidence of program effects, in specific

By using sound methodological evaluation procedures, district coordinators can assess whether the program is accomplishing what it was designed to do. Assessment simply requires asking evaluation questions concerning how well the program objectives were accomplished and whether the program goals were achieved.

2 For the state code definition of scientifically based research, see Appendix 1.


Gathering evidence to demonstrate the impact of a program on its participants requires the formulation of an evaluation design plan that includes:

• Assessing how faithfully to program design the program has been delivered;

• Measuring program outcome objectives using various and appropriate student performance standards;

• Establishing baseline and comparison data to which program outcome data may be compared.

Building these three components into a program evaluation is essential to document evidence of program effect.
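For teams that keep their planning notes electronically, the three components can be captured as a single record and checked for completeness before data collection begins. The sketch below is illustrative only; the field names and example measures are hypothetical, not prescribed by this handbook.

    # Illustrative sketch only: field names and example measures are
    # hypothetical, not prescribed by this handbook.
    evaluation_design_plan = {
        "program": "Life Skills Training (LST)",
        # 1. Fidelity of delivery relative to program design
        "implementation_measures": [
            "number of LST sessions delivered versus number planned",
            "teacher adherence to the curriculum (observation checklist)",
        ],
        # 2. Program outcome objectives and their performance standards
        "outcome_measures": [
            "pre/post survey of drug resistance skills and attitudes",
        ],
        # 3. Baseline and comparison data for judging outcomes
        "baseline_comparisons": [
            "pre-program survey scores for the same students",
            "prior-year district youth survey results",
        ],
    }

    # A design plan missing any of the three components cannot document
    # evidence of program effect.
    for component in ("implementation_measures", "outcome_measures",
                      "baseline_comparisons"):
        assert evaluation_design_plan[component], f"missing: {component}"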

Getting Started

Each district's evaluation project will vary depending on the resources available, the scope of the evaluation, the quality of the evaluation questions, and the evaluation design to be implemented.

    All evaluation projects should produce:

    A documentation of what happened in the program,

    A description of which strategies worked best in the program,

    A measurement of program outcome objectives.

At the onset of an evaluation, address such basic questions as:

• Who is your audience?
• What type of information does this audience want to know?
• What activity or activities can you measure to answer your evaluation question(s)?
• How will you collect information about these activities to demonstrate evidence of effect?
• From whom and from what source will information be provided?
• What is the timeline for data collection?
• In what form and by what means will results be reported to your audience?


Things To Remember

1. There is no perfect, one-size-fits-all recipe that determines the success or failure of an evaluation. Each district's situation is different and will require variations within evaluation activities.

2. A program evaluation does not have to be overly complex. A narrowly focused evaluation yielding meaningful results is more useful than one that tries to bite off more than it can chew.

3. A program evaluation does not have to be something external to staff responsibilities. Many evaluation activities are already a part of staff duties.

4. This evaluation project is not just about accounting for outcomes, but also about gaining relevant information in order to make meaningful decisions to improve the implementation of a program.

5. Ultimately, the results of the project should benefit our youth.


PHASE I: PREPARING FOR THE EVALUATION

Following these steps in your evaluation project will be well worth the time and effort.

Step 1: Select a program to evaluate
• Choose a program that is worth the effort of evaluation.

Step 2: Identify key stakeholders
• Decide which stakeholders take priority in regard to this evaluation project.
• Identify information that the key stakeholders want to know from the evaluation project.
• Establish a relationship with key stakeholders in order to garner their support for the evaluation, particularly those who deliver the program.

Step 3: Establish an evaluation team
• Enlist district staff, school board members, teachers, school administrators, volunteers, members of collaborative community services, and evaluation consultants to serve as resources during all or part of the evaluation project.

Step 4: Consider a budget
• Make initial and adequate budget computations.

Step 5: Decide how to keep track of this evaluation project
• Establish a system to keep evaluation materials organized.

Step 6: Establish an initial timeline
• Schedule completion dates for each step.

Step 1: Select a Program to Evaluate

Each district should choose an Alcohol, Tobacco, or Other Drug (ATOD) or a Violence prevention program that addresses the district's SDFS Priority Goals. If only one SDFS program is funded, then the decision has already been made, and you can move on to the next step.

If that is not the case, consider the following list:

• Give preference to promising programs over proven programs. More systematic knowledge is needed to support the implementation of promising programs, since promising programs have a two-year limit to demonstrate a positive impact on program outcomes.

• Consider choosing a program in which you are spending the bulk of your funds. Determining the extent to which desired results are gained in a given situation is valuable information to have, especially when resources and personnel are limited.

• Consider choosing a program you are especially proud of in terms of its success. A sound evaluation will provide more support to your claims.

In addition, also consider:

• Is there a vested interest in a program within your school district and/or local community? Is it a program that is of particular interest to stakeholders? If so, consider choosing this program to evaluate.

• Can information that stakeholders want to know about the program be collected through your evaluation efforts?

• Is there a program that is highly charged politically? If so, will evaluation efforts be compromised? Or, instead, will the evaluation efforts provide a means to a resolution of the political issue?

• Is this a program that is going to be continued? If not, is the evaluation effort worthwhile?

• Is this a program that may be easily modified, based on recommendations that emerge from the results of the evaluation?

• Are the goals and objectives of the program measurable, given the resources available?

• If you are using a commercial program, does it come with a packaged evaluation kit? Does it include pre- and post-test tools?

Worksheet 1.1 Notes for Program Selection

    Name of Program:

    (List reasons for selecting this program)


Step 2: Identify Key Stakeholders

Stakeholders within a district include any group of people or organizations involved in or affected by the performance and results of the program under evaluation. These might include students, parents, teachers, counselors, administrators, local advisory councils, the local law enforcement agency, members of collaborative community services, or state and local agencies. When beginning this evaluation project, it is important to identify key stakeholders for the following reasons:

• Stakeholders within a school district are a primary audience of the evaluation report. Identifying their concerns will provide added focus when setting the goals of the evaluation.

• Key stakeholders can help to determine relevant evaluation questions. The types of information stakeholders want to know about the program depend upon their perspectives on the program. Drawing from a broad base of viewpoints when developing evaluation questions makes evaluation results more relevant within the community.

• Key stakeholders, particularly those who are directly associated with the program, must buy into this evaluation project. Garnering the support of key stakeholders will not only facilitate the information and resources they can contribute, but will also lessen resistance to the final product and its recommendations.

• Key stakeholders can provide valuable resources to the evaluation project. Involving stakeholders in the planning and execution of the evaluation project not only recognizes their vested interests but may also provide your evaluation efforts with additional resources, e.g., parents or other community volunteers, data resources, computer support, in-house evaluation expertise, etc.


Worksheet 1.2 Identify Stakeholders

Columns: Stakeholder | Vested Interest | Type of information the stakeholder would like to know as a result of the evaluation | Primary Audience (Yes/No)

First row (pre-filled): SDFS Coordinator and SDFS staff | | | Yes
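Districts that prefer an electronic record to a paper worksheet can keep the same columns as a small table of entries. A minimal sketch, with hypothetical stakeholder rows (only the SDFS coordinator row comes from the worksheet itself):

    # Hypothetical Worksheet 1.2 entries; only the SDFS coordinator row
    # appears in the handbook's worksheet.
    stakeholders = [
        {"stakeholder": "SDFS Coordinator and SDFS staff",
         "vested_interest": "program delivery and accountability",
         "wants_to_know": "Is the program producing positive results?",
         "primary_audience": True},
        {"stakeholder": "School Board",
         "vested_interest": "district prevention policy",
         "wants_to_know": "Which prevention strategies merit funding?",
         "primary_audience": True},
        {"stakeholder": "Parent volunteers",
         "vested_interest": "student well-being",
         "wants_to_know": "What changes are seen in students?",
         "primary_audience": False},
    ]

    # The primary audience identified here frames the goals of the
    # evaluation (see Phase II, Step 2).
    primary = [s["stakeholder"] for s in stakeholders if s["primary_audience"]]
    print("Primary audience:", ", ".join(primary))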

Step 3: Establish an Evaluation Team

Each district must establish an evaluation team consisting of stakeholder representatives who will take an active role in all or part of the evaluation process. There are many advantages to such an evaluation team:

• No single person has to take full responsibility for the entire project, although individual responsibilities have to be clearly designated.

• Spreading the responsibilities of the project among team members, particularly in the collection phase, can increase the amount of information collected.

• Inclusion of major stakeholders ensures that different perspectives and areas of expertise are brought to the project. Putting together a team with diverse areas of expertise and access to various network pools provides a diversity of resources and will benefit the evaluation project.

Even if a third-party professional is hired to conduct the evaluation project, it is still a good idea to establish a team or committee to work with the evaluator to ensure that the scope of the evaluation is met and that relevant questions are addressed.

Whom should the evaluation team consist of?

• One person from the district SDFS office, preferably the coordinator, should actively oversee the entire project. This person should have sufficient background knowledge of SDFS program implementation within the district to make key decisions in developing evaluation questions and to provide oversight of the processes of this project.


• Additional team members can be drawn from various stakeholder groups. Some may serve as advisory members, while others, such as SDFS staff or other district staff, must be capable of assuming responsibility for necessary activities during the course of the evaluation. Teachers, administrators, parent volunteers, and other collaborators may also serve in advisory or support roles.

• A qualified evaluator can ensure that the evaluation design plan has internal integrity and validity.

There are three ways to use an evaluator's services:

1. The district can contract an external evaluator to oversee the entire project; however, it is still essential that someone from the SDFS district office work closely with the evaluator as evaluation questions are developed.

2. The district can recruit someone within the district staff who has evaluation experience. This person can assist the evaluation team, providing technical assistance where needed.

3. The district can hire an evaluator as a consultant. The evaluation team can rely on this person for advice on methodological soundness and solutions to various problems as they arise.

Hiring an external evaluator provides the evaluation project with a valuable independent perspective. External evaluators may also have access to additional staff, database management resources, as well as sufficient time to conduct a quality evaluation. Whether using an external evaluator or an in-house evaluator, both should know the history and context of the program under evaluation.

NOTE

For those who will be using evaluator services, refer to Appendix 2 for suggestions on how to hire a qualified evaluator.

Step 4: Consider a Budget

Evaluations require money, and the amount of money depends on the commitment of district leadership, the size of the program, the number of specific evaluation questions that will be addressed, and the availability of evaluation resources, especially staff time. A dollar amount cannot be specified until an evaluation design plan is actually formulated. Theoretically, the more money a district is able to set aside for evaluation activities, the more program objectives the district can assess. Generally speaking, investing a small amount of money in the evaluation will allow a district to do little more than monitor some program activities and count the number of participants. It will take more money to evaluate already existing data sources when assessing participant outcome objectives, and it will take an even larger investment of funds to conduct data collection by means of surveys, interviews, focus groups, and observations.

General items to consider during budget development are:

• Salary for evaluation staff,
• Consultant fees,
• Travel expenses to cover travel to different sites, if necessary,
• Communication costs, i.e., postage and telephone calls,
• Printed materials: records and other documents, printing of data collection instruments, and the final reports, and
• Supplies and equipment.
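As a purely arithmetical illustration of totaling these line items, the sketch below uses invented dollar figures; actual amounts depend on district size, the scope of the evaluation, and the final design plan.

    # All dollar figures are invented for illustration only.
    budget = {
        "Salary for evaluation staff (portion of staff time)": 4000,
        "Consultant fees":                                     2500,
        "Travel to school sites":                               300,
        "Communication (postage, telephone)":                   150,
        "Printed materials (instruments, reports)":             450,
        "Supplies and equipment":                               200,
    }

    for item, cost in budget.items():
        print(f"{item:<55} ${cost:>6,}")
    print(f"{'TOTAL':<55} ${sum(budget.values()):>6,}")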

NOTE

Remember that budget revisions will have to occur as you make your evaluation design more concrete. In turn, you will need to track the availability of resources as you develop your evaluation design plan.


Step 5: Keep a Record of All Evaluation Project Activities

To avoid confusion during the evaluation project, a meticulous record of all evaluation decisions and activities must be kept. In particular, special care must be taken during the data collection phase. Given the highly detailed nature of data collection, any disorganization can compromise the accuracy of your evaluation results.

There are numerous organizational methods for tracking projects. Since different styles work for different people, choose one or more that will work for your evaluation team.

A few examples include, but are not limited to:

• A dated journal that includes detailed information on decisions, tasks assigned, tasks performed, and notes about unplanned or unexpected situations;

• A system of file folders sorted by tasks, team members, dates, or sites;

• A filing cabinet or set of boxes sorted by tasks, team members, data, or sites;

• Notations made in this handbook for record-keeping purposes or as a quick reference to the location of various project materials.

Keeping a well-organized, clearly documented project will facilitate the ease with which the final report can be written.


Step 6: Establish an Initial Timeline

Items 1 through 4 in the worksheet below pertain to the steps in Phase I. In addition, it is also part of the preparation phase to consider items 5 through 8, which broadly span the duration of the evaluation project. A more detailed timetable will be necessary later as more specific evaluation activities are planned.

Worksheet 1.3 Timetable During Phase I

Activity | Scheduled finish date

1. Select a program. |
2. Identify stakeholders; talk to them. |
3. Assemble the evaluation team. |
4. Schedule meetings to formulate a design plan. |
5. Finish the design plan. |
6. Submit the evaluation proposal to the Department of Education, Office of Safe Schools. | Fall of the school year in which program activities are being evaluated
7. Finish collecting data. |
8. Submit the final report. |
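A team tracking Worksheet 1.3 electronically can record each activity with its scheduled finish date and print the timetable in order. In this sketch the dates are placeholders, since each district sets its own schedule; the handbook fixes only that the evaluation proposal is due in the fall of the school year being evaluated.

    from datetime import date

    # Placeholder dates; each district sets its own schedule.
    timetable = [
        ("Select a program",                                  date(2003, 8, 15)),
        ("Identify stakeholders; talk to them",               date(2003, 8, 29)),
        ("Assemble the evaluation team",                      date(2003, 9, 12)),
        ("Schedule meetings to formulate a design plan",      date(2003, 9, 19)),
        ("Finish the design plan",                            date(2003, 10, 10)),
        ("Submit evaluation proposal (Office of Safe Schools)",
                                                              date(2003, 10, 31)),
        ("Finish collecting data",                            date(2004, 5, 14)),
        ("Submit final report",                               date(2004, 6, 30)),
    ]

    for activity, finish in sorted(timetable, key=lambda item: item[1]):
        print(f"{finish.isoformat()}  {activity}")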


Things To Remember

1. Choose a program that is worthy of the effort of evaluation.

2. Remember that you, the SDFS coordinator and staff, are key stakeholders as well. To conduct this evaluation project with a measure of independence and impartiality (so as not to bias the results), it is important to understand, early on, your own vested interest in this program.

3. Each district must decide what type of evaluation team to put together based on local circumstances and available resources.

4. Including a representative from each and every group and sub-group in your community is not a realistic goal for your evaluation team. Nor is it practical to attempt to address every need of every stakeholder in the evaluation. Based on your own experience and knowledge of the nuances of your school district, you will have to prioritize both stakeholder representation and their concerns as is feasible.

5. Keep precise records of all the various activities and circumstances of the evaluation project. Not only is this sound methodological practice, but it also serves as a mechanism to support the credibility of your evaluation proceedings. If for some reason the evaluation data or findings are called into question, a detailed journal and a well-organized collection of data materials will verify that you were not falsifying information.


PHASE II: CHOOSING THE RIGHT EVALUATION QUESTIONS

Writing well thought-out, relevant evaluation questions is the secret to a meaningful evaluation. Keep in mind that the more specific the focus of the evaluation, the more efficiently the evaluation team can use its resources. In addition, the fewer evaluation questions asked, the more information can be collected about each question. Conversely, the broader the scope of the evaluation, the less time and resources there are to devote to single issues.

Step 1: Define the program
• Explain the rationale for the program's implementation within the district.
• Describe the design features of the program.
• Outline the program's goals and objectives.

Step 2: Set the scope of the evaluation project
• Articulate what the evaluation team intends to accomplish with this evaluation project.
• Think about what means it will take to accomplish this evaluation project.
• Decide what kind of analysis to present to the intended audience.

Step 3: Develop general evaluation questions
• Write general questions about program implementation objectives.
• Write general questions about participant outcomes.

Step 4: Write specific evaluation questions
• Write evaluation questions in measurable terms.
• Note questions that will not be addressed in this study.

Step 1: Define the Program

Evaluation questions stem from the information key stakeholders and other primary audiences want to know about the program. Usually, more answers are sought than can be efficiently provided in a single evaluation effort. While choosing what to evaluate within a single program can be daunting, a clear understanding of the program itself will illuminate the elements of the program that should be addressed.

With the completion of this step, the evaluation team will have a description of the program in concrete terms, as it is designed to be implemented. This will be helpful in the following ways:

• Clearly identified program characteristics, goals, and objectives will give the intended audience the necessary background to understand the scope of the evaluation study.

• The evaluation team should identify the selected program's implementation objectives and participant outcome objectives in precise terms. The more precise the objective, the easier it becomes to link specific operations to specific outcomes. This process will help generate clearly focused and manageable evaluation questions.

• A clearly stated, well-understood program design will enable the evaluation team to make decisions about what components of the program to focus on and to select program objectives that are feasible to evaluate within the scope and time frame of the evaluation.

• Clearly defining terms and stating program objectives enables evaluators to compare actual implementation of the program with the program design as it was intended to be implemented.


TASK: Describe District Background Information

This section should briefly describe any district demographic information that would help the audience understand the rationale for implementing this program. These might include:

• County or district population,
• Level of unemployment,
• Number of juvenile arrests,
• Number of elementary schools,
• Number of middle schools,
• Number of high schools,
• Percent of free and reduced lunches,
• Number of non-public schools within the county,
• List of risk factors and protective factors as assessed in your needs assessment,
• Other school district characteristics that you feel are relevant.

TASK: Delineate a Program Overview

• Give the name of the program; explain any acronyms.
• Identify the type of program: ATOD, Violence Prevention, or both;
• Identify the state SDFS goals or district-developed goals the program is addressing;
• List risk factors and protective factors addressed specifically by this program;
• Provide a general description of the program design:
  • What are the basic components of the program?
  • If the program is a commercial package, what are the manufacturer's specifications?
• List the materials and resources required to administer the program;
• Summarize program costs.


Clarifying Program Goals and Program Objectives

For the purposes of this evaluation project, and particularly the next task, "program goals" and "program objectives" assume specific and separate meanings.

Program goals address the overall purpose, or mission, of a specific program. They outline the conceptual scope of what the program plans to achieve. Program goals are stated in a general fashion and denote intent. In the case of SDFS projects, for example, a five-year goal may be to reduce vulnerability to pro-drug social influences by a certain amount across the district. Used in this manner, program goals provide the overall direction for the program. Do not confuse program goals with the 5-Year Project Goals, which refer to the overall goals for the state regardless of the programs implemented. The program goals defined here refer specifically to the program to be evaluated.

Program objectives are concerned with the design and implementation of specific activities within a given program, as well as the range of outcomes expected as a result of that program. An example of a program objective would be one that addresses providing adolescents with the skills to handle social situations with confidence.

Program objectives can be categorized into two types3:
• Program implementation objectives
• Participant outcome objectives

Program implementation objectives
Objectives of this type concern themselves primarily with processes, including identification of target populations, the manner in which specific skills are taught, staff, material resources required, and the scheduling of planned activities throughout the school year. Program implementation objectives address the program's effect on outcomes in terms of the efforts made in the design and operation of the program.

It is important to understand that the manner in which these objectives are actually applied during the life of the program directly affects program outcomes.

3 Use of these two terms is inspired by, but not limited to, the approach in the Administration on Children, Youth and Families' The Program Manager's Guide to Evaluation.


Participant outcome objectives
Objectives of this type address the anticipated changes in participant knowledge, skills, perceptions, attitudes, intentions, and/or behaviors that occur as a result of program implementation. Ultimately, the anticipated changes in participants are reductions in ATOD use and violence among youth. The very nature of prevention programs is such that expected outcomes often do not occur immediately following program implementation. It may take months or years after program delivery before changes in participants' behavior come to fruition.

TABLE 2.2 Program Participant Outcome Objectives

Participant Outcome Objectives Address: What do you expect to happen to your participants as a result of your program?

Immediate Results: The expectations about the changes in a participant's knowledge, skills, intentions, attitudes, and/or perceptions, as well as behavior (when applicable), immediately after completion of the program.

Oceanside District examples of immediate results:
• Provide students with the necessary skills to resist social (peer) pressures to smoke, drink, and use drugs;
• Help them to develop greater self-esteem, self-mastery, and self-confidence;
• Enable children to effectively cope with social anxiety;
• Increase their knowledge of the immediate consequences of substance abuse;
• Enhance cognitive and behavioral competency to reduce and prevent a variety of health risk behaviors;
• Reduce vulnerability to pro-drug social influences.

Longer-term Outcomes: The changes in behavior anticipated to follow the immediate results. These can be reflected in rates of expected change, as stated in the district's established Priority Project Goals 2005, or for any time period in the interim, such as the 1-Year Outcome Objectives.

Oceanside District examples of longer-term outcomes:
• Decrease drug abuse risk by reducing intrapersonal motivations to use drugs;
• Cut tobacco, alcohol, and marijuana use 50%-75%;
• Cut polydrug use up to 66%;
• Decrease use of inhalants, narcotics, and hallucinogens.


For the purpose of program evaluation, therefore, participant outcome objectives need to be separated into two types: (1) those that occur immediately following the completion of the program and (2) those that occur in the longer term4. (This will be discussed in greater detail in Step 1 of Designing a Data Collection Plan.)

4 Discussion of outcome objectives as immediate and longer-term has been taken from Hawkins and Nederhood (1987), but is not limited to this source. Program outcomes may also be categorized into immediate, intermediate, and long-term outcomes. This would be suited for programs that have outcomes that can be clearly delineated into three such stages and distinctly measured as such. Use of a three-stage categorization of program outcomes is best utilized for a program evaluation that can extend beyond a 1-year school timeframe. For more information see, for example, McNamara (1999), Basic Guide to Outcomes-Based Evaluation for Non-Profit Organizations with Very Limited Resources.


TASK: Outline Program Goals and Objectives

Using the preceding definitions as a guide, outline the program's goals and objectives based on the following suggestions:

1. State the mission (or overall goal) of the program.

2. Describe the overall program objective(s). Again, these are not the 1-year outcome objectives established in the grant application, but the overall agenda the program is designed to accomplish, i.e., teach children specific interpersonal skills providing them with the ability to say no to drugs in peer-pressure situations.

3. List specific program implementation objectives.
• What activities, curriculum, or other services will be delivered?
• For whom is the program designed?
• Identify the target population for whom this program is designed;
• In which school(s) is the program being administered?
• Which students will participate?
• How will the program be built into the existing school(s)?
• Who will administer the program activities?
• How will the program activities be administered?
• What is the schedule of activities throughout the school year?
• Other information relating to the planned implementation of the program.

4. List immediate participant outcome objectives. These include what is expected to change in the participant's knowledge, skills, perceptions, attitudes, intentions, and/or behavior immediately after the completion of the program. These will be specific to the content of the prevention service delivered.

5. List longer-term anticipated outcomes of the program. These are the anticipated changes in behavior, perceptions, attitudes, and/or intentions some time after completion of the program. These are both specific to the content of the prevention service delivered and related to the corresponding 5-Year Goals of the district.


Worksheet 2.1 Defining the Key Aspects of Your Program

Mission statement: What does this program intend to accomplish?
___________________________________________________________________

Target Population: What are the important characteristics of the planned target population?
___________________________________________________________________

Linking Program Processes to Participant Outcomes (columns):
• What are the key targeted risk or protective factors?
• What specific implementation objective addresses the risk or protective factor listed?
• What are the immediate outcomes expected (i.e., skills gained, or change in intentions)?
• What are the longer-term anticipated outcomes (i.e., 1-year Program Outcome Objectives)?
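The chain this worksheet asks for (risk factor, implementation objective, immediate outcome, longer-term outcome) can also be kept as one record per row. A sketch using the Oceanside LST examples from Table 2.2; the risk-factor wording is a hypothetical paraphrase rather than handbook text.

    # One row of Worksheet 2.1, filled in with Oceanside/LST examples from
    # Table 2.2. The risk factor label is a hypothetical paraphrase.
    worksheet_row = {
        "risk_or_protective_factor": "susceptibility to pro-drug peer pressure",
        "implementation_objective": "teach drug resistance skills to "
                                    "6th- to 8th-grade students",
        "immediate_outcome": "students gain skills to resist social (peer) "
                             "pressures to smoke, drink, and use drugs",
        "longer_term_outcome": "cut tobacco, alcohol, and marijuana use 50%-75%",
    }

    for column, entry in worksheet_row.items():
        print(f"{column}: {entry}")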

Step 2: Set the Scope of the Evaluation Project

Begin to establish the scope of the evaluation project by setting goals for the evaluation itself. Decide what the evaluation team can achieve with this evaluation project; then, conceptualize what it will take to accomplish those goals.

TASK: Set Specific Evaluation Goals for the District Evaluation Project

The focus of the SDFS evaluation project in general centers on whether the programs currently in progress are working effectively within districts. District evaluation teams must focus on setting goals for the evaluation project that are specific to the circumstances within their own district. For example, if the program is a large one, a reasonable goal may be to evaluate only a single grade level of a multi-grade-level program. If issues of program delivery are a major concern, then an appropriate goal would be to place emphasis on an evaluation of program implementation objectives. If the program happens to be in its first year of implementation, it would be appropriate to set a goal to establish a baseline of outcome information with which to compare future years' outcomes.

Use these questions as prompts to develop the goals of the evaluation:

• What is the rationale for choosing this program to evaluate?

• What information would the SDFS Advisory Council, the School Board, the school administration, or other primary audiences like to learn from this evaluation?

• What decisions do you want to make as a result of this evaluation?

• If it is a large program, do you want to document just one component of it?

• Which program objectives take priority in this evaluation project?

• Is there a particular component of the program that would benefit most from a careful examination?


NOTE

An evaluation project can have as many goals as necessary; however, too many goals will convolute the evaluation process. Each goal should clearly reflect what it is the evaluation team wants to learn.

Worksheet 2.2 Setting the Scope of Your Evaluation Project

Goal(s) of the Evaluation | Reason for choosing this Goal


    Task Envision the Major Aspects of TheEvaluation Project

    This is not the point to outline specific details of the evaluation plan. Rather, itis the time to conceptualize the entire process of the evaluation so that the evalu-ation team can assess available resources, time limitations, and effort involvedin conducting the evaluation. It is time to get an idea of what evaluation strat-egies are feasible.

    Important questions for the evaluation team to consider include:

    • Which program activities can be examined that fit within the scope ofyour evaluation? For example, if you are evaluating only a singlecomponent of your program, what service or curriculum will youevaluate?

    • How might evaluators examine these program activities? For example,would surveys best measure participant behaviors and attitudes?Would teacher interviews on program delivery provide additionalrelevant information?

    • What types of information will be compared with your results: Schooldisciplinary referral records, already existing state or county youthsurveys?

    • Who can responsibly collect information (data) for the evaluationproject?

    • Will information be collected from the entire target population or justa sample of them? What is the rationale for sample selection?

    • Can you foresee any potential problems that might hinder theevaluation team’s ability to accomplish the goals of the evaluation?

    • What is the best way to report the findings in a useful way to yourintended audience?

    • How much time do you have to collect data?

    NOTE

    Again, do not try to outline the specific details of an evalu-ation plan here. The purpose is to envision the evaluationproject as a whole from start to finish. As the evaluationteam looks at its own situation, it will become apparentwhat can be realistically accomplished with the evalua-tion project.

Step 3: Develop General Evaluation Questions

General evaluation questions follow from the goals set for the evaluation process. For example, a general evaluation question regarding program implementation would be, 'Is the program being implemented with fidelity?'

There is no specific formula for writing evaluation questions. They must be asked in a manner that requires a response that can be accurately measured, analyzed, and reported. As a rule of thumb, avoid vague and complex evaluation questions. It is far more constructive to keep the scope of the evaluation project small and develop meaningful results than to try to accomplish too much and not do an adequate job.

TASK: Review Previous Evaluation Research on the Same or Similar Programs

Other evaluation studies will provide ideas on evaluation questions, data collection methods, and depth of analysis of findings. Check for previously published studies online or ask a resource librarian for help. If the program is produced commercially, contact the company directly for documentation on previous evaluations. (Also inquire about any evaluation kits or data collection tools available from the publisher.)

TASK: Ask Questions About Program Objectives That Are Relevant to the Goals of the Evaluation

Evaluation questions need to be concerned with how well program objectives were met. The program objectives under question must relate to, or fall within the scope of, the goals of the evaluation.

The number of general questions to be asked depends upon what the evaluation team deems to be efficient relative to the size and focus of the evaluation project.

If helpful, refer back to Worksheet 1.2 to review what it is that key stakeholders want to find out about the program. Link this information to specific program implementation objectives and participant outcome objectives as described in Step 1. Generate a list of questions from these two sources.


TASK: Select Which Questions to Evaluate

More than likely, the evaluation team will come up with more questions than can be evaluated in this study. It is not feasible to address every question, no matter how significant. The evaluation team must decide which question(s) take priority.

Suggestions for selecting questions:

• Which stakeholder concerns do you want to satisfy?

• Who among your audience will make good use of the evaluation information provided in the final report?

• Is there something about your program that is not being answered adequately elsewhere? Would the evaluation questions you select address that something?

• Would the information resulting from the selected evaluation questions be considered interesting?

• Will the resulting evaluation information contribute to new knowledge about the program?

• Do you have the resources available to answer the selected question adequately? Costs include labor, a reliable method of collection, a quality data source, adequately trained evaluation staff, etc.

• Do you have a sufficient amount of time to answer the selected question adequately?


TABLE 2.3 Examples of General Questions

1. How does the LST program affect 6th to 8th grade students' health knowledge regarding tobacco, alcohol, and other drugs?

2. Is there any evidence of changes in behavior or attitudes toward tobacco, alcohol, and other drugs in 6th to 8th grade students who have participated in the LST program?

3. Do teachers follow the program implementation instructions as planned?

4. Do any variations in the original LST design plan, such as the targeted student population, affect program outcomes?


Worksheet 2.3 Selected Evaluation Questions

General Question(s) | Rationale for Evaluating This Question


TASK: Note Questions Not Selected

It is helpful to make a list of those questions that will not be evaluated.5 Inevitably, at some point later in the evaluation, you will find yourself asking, "Why didn't we evaluate this?" Refer back to this list to remind yourself of your rationale.


5 Hawkins and Nederhood, p. 9.

Worksheet 2.4 Evaluation Questions Not Selected

General Question(s) | Rationale for Not Evaluating This Question

Step 4: Write Specific Evaluation Questions

In order to plan a data collection procedure, it is necessary to develop specific evaluation questions for each general question that will actually measure what is being asked.

TASK: Develop at Least One Specific Evaluation Question for Each General Question

General evaluation questions need to be broken down into specific questions that ask for concrete evidence. Each specific question should specify an activity that can be clearly measured or observed, e.g., occurrence of fights on school grounds, self-reported attitudes about drug use, number of students completing the program. Specific evaluation questions are the basis of actual student surveys, teacher implementation questionnaires, and other forms or records that you may use to collect the desired information.

Often there is more than one type of evidence that will answer a general evaluation question. For example:

When tracking the impact of the program on participants, evaluators may look at gains in skill, changes in attitudes, and/or changes in behavior. These are three distinct elements, all of which indicate an answer to the same general evaluation question.

When documenting the actual delivery of a specific program activity, evaluators may want to ask about teachers' training sessions prior to the program. In addition, specific questions can be asked about how teachers actually used the program curricula in the classroom. Both are distinct elements that are equally valid and important to the evaluation of program implementation.


Specific evaluation questions need to be written in a manner that points to only one distinct facet of change. For example:

In a program that aims to reduce vulnerability to anti-social influences, asking students if they experienced a positive change in their perceptions of drug use after participation in the program examines just one facet of change within that program. Testing students' knowledge about refusal skills is another.

Specific questions concerning implementation of program services can be stated in a manner that asks for a description. For example:

A numerical count: How many students participated in a specific scheduled activity? How much time is actually spent on program curriculum in the classroom?

A verbal account: In what settings is the program being offered? Which components of the program were actually implemented by each teacher? How did teachers actually implement program activities within their own classrooms?

Specific questions concerning program outcome objectives are usually phrased in a manner that establishes a relationship between some facet of the program and a desired change in outcome. For example:

Is there a decline in fighting incidents?

How does this decline compare with schools without the same program over the same period of time?

What is the difference between students' knowledge of tobacco before and after the program curriculum was taught?

Specific evaluation questions concerning program outcomes will reflect an inquiry into changes over time resulting from program participation, or an inquiry into the differences in changes over time because of age, gender, socio-economic status, or other characteristics.
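As a concrete illustration, the sketch below (a minimal example in Python using pandas; the column names and scores are hypothetical, not prescribed by this handbook) computes a change-over-time indicator for each participant and then breaks that change out by a characteristic such as grade level:

    import pandas as pd

    # Hypothetical survey data: one row per student, with knowledge-test
    # scores collected before and after the program curriculum was taught.
    scores = pd.DataFrame({
        "grade":      [6, 6, 7, 7, 8, 8],
        "pre_score":  [55, 60, 62, 58, 70, 65],
        "post_score": [68, 72, 75, 66, 82, 78],
    })

    # Change over time for each participant.
    scores["change"] = scores["post_score"] - scores["pre_score"]

    # Overall change, and differences in change across grade levels.
    print("Mean change, all participants:", scores["change"].mean())
    print(scores.groupby("grade")["change"].mean())

The same pattern applies to any other characteristic of interest (gender, socio-economic status, etc.): compute the change once, then group by the characteristic.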


TABLE 2.4 Examples of Measurable Questions

General Evaluation Question 1: How does the LST program affect 6th to 8th grade students' health knowledge regarding tobacco, alcohol, and other drugs?
1a. Do 8th grade students show significant improvement in knowledge about the detrimental effects of tobacco on individuals' health?
1b. Do students show a significant increase in knowledge about the effects of alcohol on the central nervous system? (Student surveys might include specific knowledge questions about the risk of lung diseases, blood-alcohol levels, etc.)

General Evaluation Question 2: Is there any evidence of changes in behavior or attitudes toward tobacco, alcohol, and other drugs in 6th to 8th grade students who have participated in the LST program?
2a. Is there a significant decrease in the number of student participants who report using tobacco products?
2b. Is there any difference in attitudes concerning use of alcohol or other drugs between 6th, 7th, and 8th graders?

General Evaluation Question 3: Do teachers follow the program implementation instructions as planned?
3a. Are teachers working within the timeframe recommended by the LST program instructions?
3b. Are teachers actually teaching the LST program curriculum as instructed in the teaching manual?

General Evaluation Question 4: Do any variations in the original LST design plan, such as the targeted student population, affect program outcomes?
4a. Are the students receiving the LST curriculum within the target age group?
4b. Do the students receiving the LST curriculum fall within the risk factors listed by the LST program?


Select specific evaluation questions based on what you and other stakeholders want to find out. Follow the same list of considerations as given for selecting general evaluation questions in order to determine which specific evaluation questions to pursue.

Worksheet 2.5 Writing Your Specific Measurable Questions

List your general evaluation questions here | Write specific evaluation questions for each general question


Things To Remember

1. It is not necessary to construct a highly detailed description of your program. Listing the major program objectives should be enough to "kick start" the process of developing evaluation questions. As you develop your evaluation plan, it will become evident which program objectives will require more attention to detail.

2. The goals of the evaluation reflect what the evaluation team wants to accomplish, which in turn reflects what the key stakeholders and other primary audiences want and need to learn about the program.

3. Evaluation questions always need to relate to what key stakeholders and the primary audience want to find out about the program as a result of the evaluation.

4. Delineate the goals of the evaluation clearly. The more clearly delineated they are, the easier it will be to write questions and formulate a design plan.

5. All evaluation questions must relate to how well the program is working in the school district.

6. Be prepared to refine specific evaluation questions as you work through the data design and collection phase of the evaluation project.

PHASE III: DESIGNING A DATA COLLECTION PLAN

The purpose of this phase is to design a procedure with which to collect the information necessary to answer the selected evaluation questions. This section requires the most detailed planning. Now is the time to decide what relevant data should be collected in order to prevent the collection of useless information later. Remember to be flexible: although the steps are laid out in sequence, earlier steps in this section may need to be revised as design issues are worked out in later steps.

Step 1: Determine what data must be collected in order to answer each evaluation question
Create clearly defined measures that relate directly to the evaluation questions. Choose a means to compare the program results with non-program circumstances.

Step 2: Determine where to find the best source of data in order to answer each evaluation question
Decide from whom or where to get the necessary source of information.

Step 3: Determine how to collect the data
Select the data collection procedure best suited to the needs of the evaluation project.

Step 4: Determine how much data to collect
Decide on sample size.

Step 5: Develop an analysis plan
Make sure appropriate information is collected to answer specific evaluation questions.

Step 6: Determine when to collect the data
Outline specific collection times. Determine latest possible completion dates.

Step 7: Attend to data collection issues
Be aware of responsibilities to respondents. Determine who will collect the data. Keep track of data in an organized fashion.
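One lightweight way to keep these steps organized is to record, for each specific evaluation question, how every step will be handled. Below is a minimal sketch in Python; the structure and all field values are illustrative assumptions, not a format prescribed by this handbook:

    from dataclasses import dataclass

    @dataclass
    class CollectionPlan:
        """One row of a data collection plan: how a single specific
        evaluation question will be answered (Steps 1-7 above)."""
        question: str          # the specific evaluation question
        measure: str           # Step 1: what data must be collected
        source: str            # Step 2: best source of the data
        method: str            # Step 3: how the data will be collected
        sample: str            # Step 4: how much data to collect
        analysis: str          # Step 5: how the data will be analyzed
        collection_dates: str  # Step 6: when the data will be collected
        collector: str         # Step 7: who is responsible for collection

    plan = CollectionPlan(
        question="Do 8th graders improve in knowledge of tobacco's effects?",
        measure="knowledge-test score, pre and post",
        source="program participants (primary source)",
        method="written survey administered in class",
        sample="all 8th grade participants",
        analysis="compare mean pre- and post-test scores",
        collection_dates="first and last weeks of the program",
        collector="SDFS coordinator and classroom teachers",
    )

Filling in one such record per evaluation question makes gaps in the plan (an unassigned collector, an unspecified comparison) visible before data collection begins.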



    NOTE

Each evaluation question must be answered in a way that ensures its:

Validity: the extent to which your data collection procedures accurately measure what they are intended to measure.

Reliability: the extent to which the data collection procedures, which include both the techniques used to collect the data and the activities of the data collectors, produce consistent results each time the procedure is administered.

Credibility: the extent to which you can demonstrate that your findings are trustworthy and not fabricated.
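As one concrete illustration of checking reliability: when several survey items are meant to measure the same construct, evaluators often report an internal-consistency estimate such as Cronbach's alpha. The sketch below is a minimal Python example (alpha is a standard statistic but is not mandated by this handbook; the responses shown are hypothetical):

    import numpy as np

    def cronbach_alpha(items):
        """Internal-consistency (reliability) estimate for a multi-item
        survey scale: rows are respondents, columns are scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses from five students to a three-item attitude scale.
    responses = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 2]]
    print(round(cronbach_alpha(responses), 2))  # values near 1.0 suggest consistency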

Step 1: Determine What Data Is Needed to Answer the Evaluation Questions

Data are facts, statistics, or other items of information that have to do with this evaluation. The types of data necessary for evaluation purposes will depend upon the program objectives and their related questions under investigation.

In general, data collected for questions about program implementation (processes or functions) tend to be descriptive. These include such information as the number of participants or the amount of time spent implementing a certain activity. They may include teachers' opinions or written descriptions of the program, information that can be obtained from interviews with teachers.

An indicator is an empirical observation, or description, that signifies a relative change in the relationship being measured. Data collected for questions concerning participant outcomes can usually be counted and measured. Specific evaluation questions that concern an outcome ask about the change in a single indicator, such as the difference in self-reported attitudes between two administrations of a survey to the same respondents at different time intervals.

As the types of questions differ, so will the measures best suited for evaluation of the program.

TASK: Create Measures of Program Implementation (Program Processes)

Implementation questions seek to measure the processes of the program. The evaluation activities surrounding this type of information will document how closely actual program implementation followed the initial design plan.

Implementation measures can focus on three different aspects of program functions:

Level of effort: involves documentation of staff time and resources invested in the scope and frequency of services delivered.

Level of participation: involves tracking program completion, attrition, and attendance rates among participants (see the sketch after this list).

Quality of program delivery: involves documentation of the history of the program with all its deviations from the design model.
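For the level-of-participation measures above, the arithmetic is simple enough to sketch directly. A minimal Python illustration (all roster numbers are hypothetical):

    # Hypothetical program records for one school year.
    enrolled = 120           # students who started the program
    completed = 96           # students who finished all sessions
    sessions_held = 15
    total_attendance = 1530  # students present, summed across all sessions

    # Level of participation: completion, attrition, and attendance rates.
    completion_rate = completed / enrolled
    attrition_rate = 1 - completion_rate
    attendance_rate = total_attendance / (sessions_held * enrolled)

    print(f"Completion: {completion_rate:.0%}")  # 80%
    print(f"Attrition:  {attrition_rate:.0%}")   # 20%
    print(f"Attendance: {attendance_rate:.0%}")  # 85%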



Keeping careful records, or examining records already being kept as part of the program's administrative organization, will yield counts of deliverables, participants, etc.

Asking program staff for more qualitative information about the program's functions during the school year, e.g., opinions on the curriculum or activities delivered, and/or their self-perceived confidence in their ability to implement services, will yield more contextual information that is not captured by administrative records.

Table 3.1 lists some types of general questions asked about program implementation objectives and the types of information collected as a result.

TABLE 3.1 Measuring Program Implementation Objectives

General question: Is the program being implemented as designed?
Information collected: level of effort, including the types of activities, services, or educational curriculum products being implemented; who received them; and their duration and intensity.
Sources: program records, or interviews with program staff.

General question: Is the program staff adequately trained to administer program components?
Information collected: level of effort, including characteristics of staff, how they were selected, and the training they received.
Sources: program records; interviews with staff who administer the program or with other program managers; training workshop evaluations.

General question: Who will participate? Is the targeted population being served?
Information collected: level of participation, including characteristics of the population, numbers of participants, how they were selected, attrition rates, etc.
Sources: program records and interviews with program staff or managers.

General question: What are some of the unanticipated outcomes of the activity?
Information collected: quality of program delivery, including a documented history of how the program was actually implemented throughout the school year.
Sources: interviews with program staff, participants, and parents.

General question: What changes could the program make to better achieve its outcome objectives?
Information collected: quality of program delivery, drawing on a compilation of the above types of information.
Sources: interviews with relevant program staff about factors that hinder or promote program implementation.


TASK: Create Measures of Participant Outcomes

There are two general considerations to keep in mind when designing program outcome measurements.

1. Prevention programs focus on changing behavior in a target population over time and sustaining that behavior beyond the duration of the program. In some cases, the desired behavior may not show up in program participants for months or even years after the program ends. This distance from the time of program involvement to the display of desired behavior would require a long-term assessment of individual program participants. Most school districts do not have the resources to conduct an evaluation that tracks the long-term participant outcomes of a single program. If it were a possibility, it would be an excellent behavioral change to measure.

2. Most locally funded evaluation efforts do not have the resources available to set up a true experimental design that controls for all variables (various contingencies or circumstances) in order to show causation. In other words, they cannot provide proof that the desired behavior is caused directly by the program. There are too many other external influences on participants to consider at one time.

The key for this level of evaluation, therefore, is to discover whether things got better or worse after the program was initiated. This can be accomplished by examining trends in behavior recorded in school discipline records or in self-report youth surveys, for example. By providing this early evidence, evaluators can demonstrate a link between participation in the program and actualization of the desired behavior.
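A trend check of this kind can be very simple. The sketch below is a minimal Python example (the yearly counts and enrollment figures are hypothetical; enrollment is used only to put years on a common footing):

    # Hypothetical counts of fighting incidents from school discipline
    # records, with enrollment used to compute a rate per 100 students.
    incidents = {2000: 84, 2001: 79, 2002: 52, 2003: 47}   # year -> count
    enrollment = {2000: 950, 2001: 960, 2002: 970, 2003: 985}
    program_start = 2002  # program initiated at the start of this year

    for year in sorted(incidents):
        rate = 100 * incidents[year] / enrollment[year]
        label = "post-program" if year >= program_start else "pre-program"
        print(f"{year} ({label}): {rate:.1f} incidents per 100 students")

A downward shift in the post-program rates does not prove causation, for the reasons given above, but it is exactly the kind of early evidence this level of evaluation can provide.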

There are other related outcome measures that can be assessed immediately following participation in the program, such as:

• skills gained,
• knowledge gained,
• changes in attitudes,
• changes in perceptions, and/or
• changes in intentions.



These measures can be directly associated with immediate participant changes that resulted from program participation. They gauge a program's immediate effectiveness without waiting to measure changes in the desired behavior, and they are also very helpful in providing information for program improvement.

In deciding what indicators to use when assessing a program's outcomes, be clear on what level and type of change the program seeks to prevent or promote. Make sure the constructed measure captures this information. For example, if evaluating an ATOD prevention program that aims to raise the initial age of first alcohol use, data collection must include a construct that measures first-time use of alcohol.

Program outcome measures are typically assessed in one of three ways:

1. Self-reports of program participants: Self-reports are the most frequently used. Studies have shown that self-report survey data are valid and reliable enough to provide accurate estimates of drug- and violence-related behavior.6

2. Other reports, such as parent questionnaires and/or interviews, police reports, school discipline referral data, and institutional records: Record data are an adequate way to provide information; however, remember that not all behavior is captured in reports.

3. Direct observation: Direct observations can be valuable as an outcome measure in certain situations, but capturing the frequency of overt and hidden illicit behavior is costly, time-consuming, and labor-intensive.

Table 3.2 lists types of general questions asked about participant outcome objectives and the types of information collected as a result.

    6 Hawkins and Nederhood, p. 35.


TABLE 3.2 Measuring Participant Outcome Objectives

General question (immediate outcomes): Is there a measurable difference in knowledge and/or skills participants gained after completion of the program?
Information collected: indicators such as the level of students' tested knowledge of program subject matter and/or skills attained; student grades; performance on achievement tests; attendance levels; promotion rates.
Sources: program survey and/or program test results; local school records.

General question: Is there a positive change in participants' attitudes and perceptions about drug use after program participation?
Information collected: students' self-reported confidence pertaining to their knowledge, abilities, skills, and success immediately following the program and later in their lives; students' self-reported attitudes and perceptions immediately following program participation.
Sources: participant self-report surveys prior to and at exit of the program.

General question: Did some participants change more than others (across gender, race, and/or grade)? Why?
Information collected: characteristics of the target population.
Sources: school records; student survey items about student characteristics.

General question: Is there a measurable difference in violent occurrences after delivery of program services?
Information collected: participants' self-reported behavior after program completion; school discipline referral data; local law enforcement data.
Sources: self-report surveys prior to and at exit of the program; local school records; law enforcement statistics.

General question (longer-term outcomes): Is there a measurable reduction in violent behaviors (or drug use) among participants of this program compared to non-participants?
Information collected: comparison of trends in school discipline referrals and/or self-reported behavior.
Sources: School Environmental Safety Incident Reporting System (SESIR)*; Florida Youth Substance Abuse Survey (FYSAS)*; Florida Youth Tobacco Survey (FYTS)*.

* See Appendix 3 for more specific detail on secondary data sources.



TASK: Build Baseline Standards for Comparison into the Design Plan

To report only measures of participant attitudes and behaviors at the time of program completion offers limited information about the program's impact within a given community. In order to substantiate that participation in the program effected a change in its participants, the evaluation design plan must incorporate baseline standards with which to compare program outcome results. The aim of these comparisons is to demonstrate that program participation does have an impact on its participants. There are basically three ways to demonstrate this:

TABLE 3.3 Ways to Make Relevant Comparisons

Standard for comparison: Compare program participants before the program begins and again after the program ends.
Method of comparison: Pre- and post-tests. The pre-test establishes a baseline of the specific item(s) being measured; this baseline is compared to a post-test measure of the change in participants after completion of the program across the same items.
Baseline of comparison: The same participants across two points in time. Conclusions can be drawn only about the program participants as compared with themselves before and after the prevention service.

Standard for comparison: Compare program participants with a selected group that has similar characteristics but does not receive the prevention service.
Method of comparison: Use of a comparison group, a pre-selected group with the same or similar characteristics as the program participants that does not participate in the program, measured on the same indicators as the program group.
Baseline of comparison: Two groups within the same timeframe. Conclusions about the program's impact on participants can be drawn by comparing the rates of change between the two groups across the same indicators.

Standard for comparison: Compare program participants with the larger population of Florida youth at a specified point in time.
Method of comparison: Use of archival data. Examine specific indicators of the larger population from already existing data (for example, school discipline referral data); compare these with the same indicators of participants after completion of the program.
Baseline of comparison: These already existing data sources survey all students, not just program participants; changes in participant attitudes and behaviors are compared with similar indicators in the larger population surveyed at a previous point in time.


The evaluation design must build in at least one method of comparison.

A pre- and post-test design of program participants is one of the simplest forms of comparison. If this is the only method of comparison used, then further support your evaluation results with research from the literature indicating that the program or the strategies used are already proven effective.

Building a control group into the evaluation design can produce even stronger evidence of program effectiveness. Under laboratory conditions, establishing an "exact" comparison between a control group and an "experimental" group is possible; however, within a specific school environment, a control group may not be an ethical or practical possibility. Comparison groups, however, may be readily identified if there is another classroom or school with the same identified characteristics, e.g., grade level, associated risk and protective factors, as the targeted program participants.

Finally, use of archival data provides a comparison between program participant outcomes and the corresponding attitudinal and behavioral trends of a larger specified population. This situates overall program impacts within a broader context.
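To illustrate the comparison-group logic from Table 3.3, here is a minimal sketch in Python (all scores are hypothetical; comparing the two groups' rates of change is a standard technique, not one mandated by this handbook):

    # Hypothetical mean scores on the same indicator (e.g., a knowledge
    # test), measured pre and post for both groups over the same timeframe.
    program_pre, program_post = 58.0, 74.0        # program participants
    comparison_pre, comparison_post = 57.0, 62.0  # similar non-participants

    # Rate of change within each group.
    program_change = program_post - program_pre          # 16.0
    comparison_change = comparison_post - comparison_pre  # 5.0

    # Comparing the rates of change between the two groups: the portion
    # of the change plausibly attributable to the program rather than to
    # maturation or other outside influences.
    difference_in_changes = program_change - comparison_change
    print(f"Program effect estimate: {difference_in_changes:+.1f} points")  # +11.0

The comparison group's change (here, 5 points) estimates what would have happened without the program; only the remainder is credited to the program.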


Step 2: Determine Where to Find the Best Source of Data to Answer Evaluation Questions

Determining the type of data to collect depends on the availability of data sources.

Data can mainly be collected from two types of sources:

1. Primary sources: These are sources from which information is collected directly for the purpose of the program evaluation. Program participants, program staff, and parents are examples of primary sources from which to gather data. Documentation of actual program implementation within classrooms must be collected from teachers; in this instance, teachers are the primary source of program implementation information.

2. Secondary sources: These are pre-existing data sources in which data have been collected at a previous time for a purpose other than this evaluation project. Administrative or other records compiled for the program itself, either at a previous point in time or during the current program delivery, are good sources of program implementation information. Pre-existing state surveys or discipline files are a good source of pre-existing data that may address program outcome objectives.

Which data source to use will depend upon its relevance to the evaluation questions, its accessibility, and its availability. Collect both primary and secondary sources of information whenever possible.

The following is a list of useful secondary data sources:

• SESIR: School Environmental Safety Incident Report
• FYSAS: Florida Youth Substance Abuse Survey
• YRBS: Youth Risk Behavior Survey
• FYTS: Florida Youth Tobacco Survey
• School Climate Survey: locally determined
• Student Discipline Reports on suspension and expulsion
• School Discipline Reports: based on local discipline infractions

Refer to Appendix 3 for tables that chart these available secondary sources of information.


Step 3: Determine How to Collect the Data

There is often more than one way to collect evidence (data) to accurately answer an evaluation question. It is always practical to choose an approach that is efficient and feasible.

TASK: Determine What Type of Procedure Is Best Suited to Collect Evidence for Questions of Program Implementation

Table 3.4 lists some methods of data collection most appropriate for the purposes of assessing implementation objectives. Remember, evaluation questions focus on how well the actual implementation of the program follows the program design plan.

Consider:

• What 'best practices' can be assessed from an evaluative study?
• What, if any, unanticipated outcomes resulted from program activities?
• What types of changes could be made to improve program delivery?

Evaluating program operations is more than a monitoring mechanism. It is a means to tell the story behind p