
Training System Management

by Richard J. McCowan


The Center for Development of Human Services is a continuing education enterprise of the Research Foundation of the State University of New York and a unit of the Graduate Studies and Research Division at Buffalo State College (SUNY). Funding is provided by the New York State Office of Children and Family Services. © 1998 SUNY Research Foundation/Center for Development of Human Services. All rights reserved.

Center for Development of Human Services
Robert N. Spaner - Chief Administrative Officer
[email protected]
Richard J. McCowan, Ph.D. - Director, Research & Evaluation Group
[email protected]
Buffalo State College (SUNY)
1695 Elmwood Avenue
Buffalo, New York 14207-2407
Tel.: 716.876.7600
Fax: 716.796.2201
http://www.bsc-cdhs.org


Introduction

Since 1980, the Center for Development of Human Services (CDHS) has provided high quality competency-based training (CBT) to social services agencies throughout New York State. These programs were based on needs identified in different ways, including professional judgment, focus groups, and comprehensive, structured surveys.

During this time, CDHS developed a competency-based training management system, which is "an integrated assembly of interacting elements designed to carry out cooperatively a predetermined function" (Gibson, 1960). For the past 20 years, CDHS has used the system to manage program activities for a large university-based training organization that specializes in public-sector, social services training, primarily in New York State.

The concept of general systems theory was introduced in 1937 by Ludwig von Bertalanffy, a biologist. He concluded that mathematical models in the biological, behavioral, and social sciences have similar structures and that, based on these similarities, it was possible to develop a general theory to unify different scientific disciplines (Bertalanffy, 1968). Systems theory, therefore, is a set of interrelated principles that explains and predicts the behavior of people, groups, or organizations. A well-designed system is an orderly whole that clearly shows "the interrelationships of parts to each other and to the whole itself" (Silvern, 1965, p. 1).

By relating common principles of different systems, systems theory improves communication and provides a holistic scientific approach. The major functions of systems theory are to:

investigate the isomorphy of concepts, laws, and models in various fields;

help make useful transfers from one field to another;

encourage development of adequate theoretical models in the fields which lack them;


minimize duplication of theoretical effort in different fields;

promote the unity of science by improving communication among specialists from different fields (Bertalanffy, 1968, p. 15).

Mesarovic (1964) noted that systems theory must be sufficiently abstract to encompass different types of specialized theories, with concepts and terms defined in context at the appropriate level of abstraction. It must describe common features, but avoid specific aspects of particular systems. The system must clearly communicate how goals are achieved and describe the interrelationships that exist among goals, procedures, and outcomes. A clear statement of systems theory clarifies assumptions about human behavior that underlie a program. It describes the measures needed to analyze program outcomes and establishes an iterative cycle of development, testing, and redesign (Scheirer, 1994).

In the early 1960s, systems theory dominated program planning and budgeting in industry and defense (Schlesinger, 1963; Wildavsky, 1964). It also exerted a significant influence on education and training. However, the term was not widely used until the space program popularized it with the words, "All systems are go!" At the present time, systems models exert a significant influence on training and education.

The CDHS model possesses the characteristics of a true system which focuses on the most important aspects of training:

What should trainees learn that will improve their job performance?

Have the trainees mastered the competencies on which training was based?

How did training contribute to improved organizational performance?


The system is a complete model that begins with statewide needs assessments by job title, with data collected from local agencies. Staff use these data to develop specific, needs-driven curricula. The system includes computerized procedures to schedule training and produce evaluation reports at the state, regional, and local levels using a competency-based, criterion-referenced evaluation design (CREST) (McCowan, McGregor, & LoTempio, 1983, 1984). It includes comprehensive follow-up procedures to analyze transfer of learning at the worker and agency levels, and it provides remedial training for trainees who fail to master important attitudes, skills, and knowledge covered during training.

Competency-based training (CBT) must have the following characteristics:

Clear job descriptions and program outcomes

Needs assessments based on job-related competencies

Structured hierarchy of domains, competencies, and objectives

Posttest assessment of trainee performance compared with clear criteria

Remedial training and OJT mentoring to assure trainee mastery of essential material

Many programs, several of which are nationally prominent and well regarded, are incorrectly promoted as competency-based training even though they do not assess trainee performance. These programs have clear goals and objectives and curricula linked to these objectives, but they are not competency-based if they do not assess trainee performance and provide supplementary instruction for persons who fail to master essential attitudes, skills, or knowledge. The training management system (TMS) is a complete system that collects evaluation data at the reaction, learning, behavior, and results levels described by Kirkpatrick (1994) (see Table 1).


Table 1
Four Levels of Evaluation

Level 1 - Reaction: Evaluate how well trainees liked a program, typically using a questionnaire. Base items on job relevance, training effectiveness, and trainer performance.

Level 2 - Learning: Examine how well trainees mastered knowledge and skills included in training objectives. Typically, learning is measured by multiple-choice items.

Level 3 - Behavior: Evaluate program-related on-the-job performance. Assess change related to training objectives using questionnaires, interviews, or direct observation.

Level 4 - Results: Relate training to organizational objectives (e.g., reduction in costs or absenteeism; increased productivity or quality). Calculate cost/benefit or ROI (return on investment).

Level 4 assessment, which is based on results related to organizational objectives (e.g., cost-benefit, return on investment, or worker performance on the job), is seldom conducted, particularly in the public sector, primarily because of cost and limited access to data. CDHS has conducted level 4 results-based studies. For example, a major study involves an examination of reports completed by a sample of approximately 400 New York City risk assessment workers who completed a 3-day training program conducted by CDHS. Copies of records completed by these workers will be evaluated by multiple raters, and these ratings will be compared to the posttest performance and supervisor ratings of the same workers. This process will determine if the posttest has predictive validity relative to on-the-job performance.
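A minimal sketch of the kind of analysis such a predictive validity study implies, assuming hypothetical posttest scores, record ratings, and supervisor ratings; none of the values or variable names below come from the CDHS study.

```python
# Illustrative predictive validity check: correlate posttest scores with
# on-the-job ratings from multiple raters and supervisors (all data hypothetical).
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One entry per worker: posttest score (percent correct), mean rating of the
# worker's case records by multiple raters, and supervisor rating (1-5 scales).
posttest = [72, 85, 90, 64, 78]
record_ratings = [3.2, 4.1, 4.5, 2.8, 3.6]
supervisor_ratings = [3.0, 4.0, 4.5, 2.5, 3.5]

print("Posttest vs. record ratings:", round(pearson(posttest, record_ratings), 2))
print("Posttest vs. supervisor ratings:", round(pearson(posttest, supervisor_ratings), 2))
```

High correlations would support the claim that posttest mastery predicts on-the-job performance; low correlations would argue for revising the test or the training.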


Phases of the Training Management System

The CDHS training management system begins with job analysis as state and local agency staff collaboratively identify competencies that relate significantly to effective job performance. Successive phases flow from these competencies as training and assessment packages are developed and training programs are conducted. During formative evaluation, training practices are modified using data on trainee achievement and trainer performance. The complete process is illustrated in Figure 1, while specific procedures for each phase are detailed in Figures 2, 3, 4, and 5.

Figure 1
[Overview of the complete training management system]

Mission


Figure 2
Mission Phase of TMS

Goal Selection:
Define program mission.
Define program goals and outcomes.
Describe performance requirements.
Specify training needs.
Identify existing training and support resources.
Identify program and staff performance constraints.
Select management staff.
Describe training alternatives.
Assign staff responsibilities.
Describe needs assessment strategies.
Develop implementation plans for training system development.
Identify potential audiences.

Job Analysis:
Obtain current job description.
Examine job tasks and activities.
List job requirements.
Study behavior of competent staff.
Examine job descriptions from other agencies.
Analyze data from other sources.
Prepare competency-based job description.
Survey staff to validate job description.
Interview agency supervisors to validate job description.
Submit job description to panel of experts.
Prepare final draft of job description.

Competence Development:
Form competency development committee.
Identify domains.
Sequence domains.
Develop competency format.
Identify sources of competencies.
Train staff to write competencies.
Develop pool of competencies.
Select relevant competencies.
Edit competencies.
Sequence competencies.
List domains and competencies.

Needs Assessment:
Design a competency-based needs assessment.
Conduct needs assessment.
Analyze needs assessment data.
Analyze performance gaps.
Identify trainee needs.
Describe training program.
List available training resources.
Identify program constraints.
Select supervisory staff.
Select training staff.
Describe training alternatives.
Identify trainees.


The mission of an organization describes its basic societal function in terms of the products and services provided for its clientele. The mission of CDHS provides the organization with a foundation for developing large-scale programs that provide high quality training for New York State human service workers.

As illustrated in Figure 2, the Mission phase includes goal selection, job analysis, competence development, and needs assessment. The process, which progressively moves from higher order, more general concepts to statements of greater specificity, is straightforward and logical. During the first step, management selects training goals congruent with the broad mission of the department. A goal, quite simply, is the purpose toward which an endeavor is directed. These goals are reviewed formally and informally by other managers, supervisors, and staff. Goals are modified and refined until a consensus is reached regarding their appropriateness. This process is conducted periodically to provide assurances that stated goals meet current, ongoing needs.

Job analysis begins with an examination of tasks involved in specific job titles based on Civil Service job descriptions and functional responsibilities. Descriptions for similar job titles from other agencies and organizations are collected and examined. Concurrently, supervisors study staff to identify behaviors that relate to competent performance. Information is collected from other sources including published research, program evaluations, staff and client focus groups, and staff performance assessments. This pool of information is used to generate a competency-based job description that is reviewed and validated by staff and supervisors. Their suggestions are incorporated in an edited version that is recycled through the review process.

Finally, a panel of experts reviews and modifies the job description until it accurately describes the competencies required for the job. For the purposes of training, the job description is based on competencies, which are related clusters of specific attitudes, skills, and knowledge. These competencies are the basis for all subsequent training activities.


Needs assessment focuses on the gaps in performance which exist between competent and ineffective staff. Identified competencies are used as Likert items in the needs assessment. Individual staff members are evaluated by themselves and their supervisors on each competence, based on level of skill and the relevance of each competence. Evaluators produce individual training needs assessments and group profiles from these needs assessments. This information is used to develop the training curriculum. The needs assessment phase also includes preliminary activities related to training, such as selection of key staff and contractors and selection of potential trainee populations.
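A minimal sketch of the gap analysis this implies, assuming hypothetical 1-5 Likert ratings of skill and relevance for each competence; the relevance-weighted gap score used here is an illustrative rule, not the CDHS formula.

```python
# Illustrative needs-assessment gap analysis (hypothetical data and scoring rule).
from dataclasses import dataclass

@dataclass
class CompetenceRating:
    competence: str
    self_skill: int        # worker's self-rated skill, 1 (low) to 5 (high)
    supervisor_skill: int  # supervisor's rating of the worker's skill, 1-5
    relevance: int         # rated job relevance of the competence, 1-5

def training_priority(r: CompetenceRating) -> float:
    """Relevance-weighted skill gap: larger values suggest greater training need."""
    mean_skill = (r.self_skill + r.supervisor_skill) / 2
    skill_gap = 5 - mean_skill  # distance from the top of the scale
    return skill_gap * r.relevance

ratings = [
    CompetenceRating("Document risk factors in case records", 2, 3, 5),
    CompetenceRating("Conduct family interviews", 4, 4, 4),
    CompetenceRating("Prepare court testimony", 3, 2, 3),
]

# Rank competencies for the individual training needs profile.
for r in sorted(ratings, key=training_priority, reverse=True):
    print(f"{r.competence}: priority {training_priority(r):.1f}")
```

Sorting a worker's scores this way yields the individual needs profile; averaging the same scores across a unit or agency yields the group profile mentioned above.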


Program Design

Figure 3
Program Design Phase of TMS

Trainer Preparation:
Describe trainer tasks and activities.
List desired trainer competencies.
Prepare trainer job descriptions.
Recruit qualified trainers.
Assess competence and background.
Provide trainers with background information.
Brief trainers on program goals.
Provide practice opportunities.
Develop trainer readiness.
Design a peer coaching program.
Observe trainer performance.
Provide enrichment opportunities.

Curriculum Design:
Design a curriculum model.
Specify learning conditions.
Specify types of learning.
Describe format of test items.
Train staff to write objectives.
Write objectives for competences.
Sequence objectives.
Identify training activities.
Adapt for adult trainee characteristics.
Develop training prototype.
Field-test prototype and products.
Assess performance of field-test subjects.
Modify training prototype.

Test Design:
Describe the purpose of the test.
Form a test development committee.
Describe the population to be tested.
Develop format for objectives.
Estimate the number of items required.
Develop an initial pool of items.
Match items to competencies.
Add items when appropriate.
Complete item editing.
Establish content validity.
Write instructions for scoring and administration.
Prepare table shells.
Prepare draft of instrument.

Test Analysis:
Field-test instrument.
Obtain feedback after field-test.
Determine reliability.
Compute index of item discrimination.
Compute item difficulty.
Determine mastery levels.
Revise test.
Administer revised test.
Reassess test.
Complete final revisions.


The Program Design phase includes trainer and trainee preparation, curriculum design, test design, and test analysis. Specific procedures for each of these steps are contained in Figure 3.

Trainer preparation begins when program managers describe trainer responsibilities. Consistent with the concept of competency-based training, the job description is based on competencies required for highly successful training. Much of the success achieved by the system is a direct result of recruiting highly qualified staff. Trainers have excellent academic credentials, as well as substantial professional experience directly related to the job-related topics presented during training. Strong interpersonal and presentation skills are essential, as is organizational support through supervision, training, and enrichment opportunities.

During curriculum design, goal-related competencies are refined into a more detailed level of specificity when training objectives (i.e., attitudes, skills, and knowledge) are prepared. This is a critical step because all training and evaluation materials relate to, and flow from, these objectives.

The level of specificity (i.e., unit of analysis) required for a training objective is an important issue which requires greater elaboration than can be provided in this monograph. However, the most important point related to this concept involves the job level for which training is designed. For beginning workers, training objectives are very specific and focus on entry level skills. For trainees at the supervisory and management levels, objectives become progressively more general and inclusive. Cognitive and educational psychologists have discussed this concept from different perspectives, including "chunking" (Mislevy, 1993), "chaining," and "association" (Gagné & Briggs, 1974). Essentially, the concept implies that people learn complex tasks by mastering a series of specific skills required to perform the task. Once a complex task is perfected, it becomes a specific skill required to perform even more complex behaviors. For example, first grade children must recognize letters before they learn to read. As they mature, they master increasingly more complex reading skills. In a similar manner, the behaviors required for entry level workers differ in complexity from those required for supervisors. Likewise, the behaviors required of supervisors are less complex than those required for management level staff. As job level increases, competencies move to higher cognitive levels of analysis, synthesis, and evaluation.

A competency-based curriculum contains the entire range of directed experiences completed by trainees. Curriculum activities are highly structured and flow directly from the competencies and training objectives. Management and staff select the objectives on which training will be based, since in most circumstances training will not include every objective from the total universe of those available. As noted earlier, a competence is a cluster of related objectives, and each objective is a specific attitude, skill, or knowledge. Since the number of specific objectives is usually extensive, trainers select and sequence the objectives on which training will be based and develop a prototype curriculum.
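A minimal sketch of the hierarchy this implies (domains containing competencies, each of which clusters specific objectives), using hypothetical child-welfare content purely for illustration:

```python
# Illustrative domain -> competency -> objective hierarchy (hypothetical content).
from dataclasses import dataclass, field

@dataclass
class Objective:
    text: str
    kind: str  # "attitude", "skill", or "knowledge"

@dataclass
class Competency:
    name: str
    objectives: list[Objective] = field(default_factory=list)

@dataclass
class Domain:
    name: str
    competencies: list[Competency] = field(default_factory=list)

risk_assessment = Domain(
    name="Risk assessment",
    competencies=[
        Competency(
            name="Document risk factors",
            objectives=[
                Objective("List the required elements of a risk assessment record.", "knowledge"),
                Objective("Complete a risk assessment form for a sample case.", "skill"),
            ],
        )
    ],
)

# Trainers select and sequence objectives from this pool to build the prototype curriculum.
selected = [o for c in risk_assessment.competencies for o in c.objectives]
for obj in selected:
    print(f"{risk_assessment.name} / {obj.kind}: {obj.text}")
```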

This monograph does not discuss techniques for writing specific instructional objectives because these have been described in other publications (Eisner, 1969; Gronlund, 1970; Mager, 1962; McCowan & Wegenast, 1996; Popham, 1969, 1990). However, protocols for writing objectives must be followed carefully because objectives form the foundation for all curriculum and evaluation development.

Test design and analysis flow directly from the objectives included in the training prototype. At this stage, the focus is on level 1 (reaction) and level 2 (learning). The first step is to describe the purpose of the test, after which a test development committee including both curriculum and evaluation specialists is formed. Before items are written, the committee considers the population to be tested and the types of items that will be used. Different item formats (e.g., multiple choice, open-ended) have different strengths and weaknesses. For example, multiple-choice items are easily scored and more reliable than open-ended questions, but less flexible in obtaining subjective, constructed responses.

After the committee estimates how many items are required to assess learning outcomes validly, committee members, trainers, and other content specialists develop an item pool. Items are matched to training objectives, which increases test validity and provides assurances that items truly represent content. Evaluation specialists edit items for consistency, validity, and appropriateness. The number of items selected per competence reflects the amount of training time devoted to that topic. Draft instruments are field-tested with persons comparable to those who will be trained. In addition to completing the test, these individuals give unstructured feedback regarding the test. Based on this feedback and an item analysis, the test is revised and administered to a different sample. The recycling process continues until the committee determines that the test is sufficiently valid and reliable. After training begins, test items are scrutinized to determine if they are adequate for the assessment.
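A minimal sketch of the item analysis step, assuming a hypothetical field-test response matrix scored 1 for correct and 0 for incorrect; it computes item difficulty (proportion correct) and a simple upper-lower discrimination index, two of the statistics listed in Figure 3.

```python
# Illustrative item analysis for a field-tested criterion-referenced test
# (hypothetical response matrix: rows are examinees, columns are items, 1 = correct).
responses = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0],
]

n_items = len(responses[0])
totals = [sum(person) for person in responses]

# Item difficulty: proportion of examinees answering the item correctly.
difficulty = [sum(person[i] for person in responses) / len(responses)
              for i in range(n_items)]

# Upper-lower discrimination: proportion correct in the top-scoring half
# minus proportion correct in the bottom-scoring half of examinees.
order = sorted(range(len(responses)), key=lambda p: totals[p], reverse=True)
half = len(responses) // 2
upper, lower = order[:half], order[-half:]
discrimination = [
    sum(responses[p][i] for p in upper) / half
    - sum(responses[p][i] for p in lower) / half
    for i in range(n_items)
]

for i in range(n_items):
    print(f"Item {i + 1}: difficulty {difficulty[i]:.2f}, "
          f"discrimination {discrimination[i]:+.2f}")
```

Items with very low difficulty or near-zero discrimination are candidates for editing or replacement before the revised test is administered.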


Training

Figure 4
Training Phase of TMS

Pre-Training Strategies:
Collect baseline performance data.
Involve staff in program planning.
Orient staff on program goals.
Describe why training is important.
Review program objectives.
Encourage trainees to complete program.
Have trainees complete pre-training transfer activities.
Describe benefits derived from training.
Conduct the training program.

Formative Assessment:
Review evaluation data.
Observe selected training sessions.
Conduct trainee focus groups.
Observe training sessions.
Compare planned with actual procedures.
Identify difficulties encountered.
Assess trainee interest in program.
Identify unanticipated effects.

Trainee Assessment:
Administer pretest.
Administer posttest.
Assess cognitive gains.
Assess attitudes.
Examine ratings of competence.
Examine subjective comments.
Complete training report.
Mail posttest results and remedial material.
Analyze trainee performance.
Provide feedback to trainers and supervisors.

Job Transfer:
Develop a re-entry program.
Provide psychological support.
Provide practice opportunities.
Reduce initial job pressure.
Reinforce competent performance.
Provide role models.
Reward successful performance.
Provide follow-up support.


Training is the most publicly visible component of the system. Figure 4 summarizes the specific procedures utilized during this phase. Pre-training strategies focus on trainee recruitment and selection. By describing potential program benefits, trainees are encouraged to complete the program. Trainees review pre-training background materials and complete pretests based on these readings and activities. Formal training ends as trainees complete posttests and contract for involvement in job transfer activities with supervisors or job mentors. After training, trainees receive reports that describe their posttest performance, including suggestions for improving performance on competencies which they did not master during training.

As training is conducted, formative assessment data are collected, and this information is fed back into the system to improve and refine the program. The process includes formal and informal discussions and focus groups conducted by trainers and observations by supervisors. Program staff examine this information to determine if the actual training procedures match those that were planned. They also make appropriate modifications to improve practices that are less effective than anticipated. Level 1 (reaction) data are collected using Likert items that examine specific aspects of training (e.g., trainer enthusiasm, quality of instructional materials). Level 2 (learning) assessment is accomplished by pretesting and post-testing attitudes, knowledge, and skills, primarily with multiple-choice items. This information is used to prepare the individual trainee reports described earlier, as well as the reports described in the section on summative evaluation. Subjective comments that describe trainee reactions are also collected and sent directly to trainers for their review. Based on these comments, trainers refine their courses appropriately.

Job transfer is a critical aspect of successful training. Attractive, well-received training which does not improve job performance wastes time and resources. The model includes pre-training activities in which supervisors prepare staff by describing the performance expectations that will result from training. In post-training activities, supervisors engage staff in performance contracts, remedial training, coaching, mentoring, and formal training that reinforces and rewards improved performance. This feedback maximizes the transfer of training and is an essential component of competency-based training. Supervisors also complete related training to improve their competence to work with staff.

Supervisors maximize job transfer by providing staff with psychological support and resources that support them as they practice newly learned behaviors. Management staff are in the best position to serve as role models and to reward and reinforce competent performance. Effective follow-up support during this post-training period contributes substantially to successful worker performance. Follow-up activities involve formative and summative evaluation strategies to determine which activities and job conditions promote transfer to performance.


Evaluation

Figure 5
Evaluation Phase of TMS

Summative Evaluation:
Complete data collection.
Analyze data.
Compare data on selected variables.
Mail individual trainee reports.
Mail group performance reports.
Interpret data.
Evaluate instructional products.
Modify instructional products.
Disseminate products.
Complete evaluation report.
Disseminate evaluation report.
Revise training program.

Follow-up Analysis:
Determine procedures.
Develop instruments.
Select sample.
Develop database structure.
Conduct survey, interviews, or focus groups.
Analyze trainee attitudes toward training.
Analyze trainee attitudes toward job.
Analyze trainee attitudes toward clients.
Complete database entries.
Produce follow-up report.
Disseminate follow-up report.
Refine training based on report.

Cost-Benefit Analysis:
Describe program activities.
Specify program limits.
Specify activity costs.
Identify non-monetary values.
Identify program outcomes.
Develop assessment measures.
Gather data.
Analyze data.
Complete cost-benefit report.

Impact Assessment:
Match training goals with agency outcomes.
Identify data sources for agency outcomes.
Assess transfer of training.
Examine trainee performance data.
Survey trainee supervisors.
Assess trainee work performance.
Assess impact on work unit.
Assess agency impact.
Assess community impact.
Analyze outcome measures.
Complete report.


Figure 5 contains detailed procedures for the Evaluation phase. Summative evaluation is based primarily on data from pretests and posttests administered by trainers. Tests are developed and validated using the procedures described earlier. Instruments include items on demographics, self-assessment of competence, attitudes towards training, and open-ended subjective responses. Before training begins, each participant completes a pretest which provides baseline data regarding performance on cognitive items based on the curriculum, as well as each person's self-assessment of competence in areas related to the training program. After completing training, each participant completes a posttest that assesses mastery for the total test and for specific content areas. Tests are produced using Teleform, which creates forms that can be optically scanned using a conventional flatbed scanner. Staff scan data into a database and use Microsoft Access programs to produce nine different types of reports.

One of these reports is a "Trainee Report" produced for each participant. It lists content areas, mastery levels of satisfactory or unsatisfactory, and brief messages describing what the score means. An unsatisfactory performance level for a content area directs participants to review specific content from materials distributed during training.

County DSS training directors receive copies of an Agency Report that lists agency workers who completed training, shows mastery status for each person in each content area, and gives pretest-posttest percentage improvement. Training directors also receive copies of training materials, recommended assignments for content areas in which trainee performance was unsatisfactory, and suggestions for related training activities. This report, in combination with the Trainee Reports, enables staff developers and supervisors to develop ongoing, individualized training programs to help caseworkers master the attitudes, skills, and knowledge required for competent performance.
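A minimal sketch of the report logic described above, assuming hypothetical content-area scores and an illustrative 80% mastery cutoff; the cutoff, names, and data are not the actual CDHS values.

```python
# Illustrative trainee/agency report: mastery status per content area and
# pretest-posttest improvement (hypothetical data and cutoff).
MASTERY_CUTOFF = 0.80  # assumed criterion; the actual CDHS standard may differ

trainees = {
    "Worker A": {"Risk assessment": (0.55, 0.90), "Documentation": (0.60, 0.70)},
    "Worker B": {"Risk assessment": (0.40, 0.85), "Documentation": (0.75, 0.95)},
}

def report(scores: dict) -> None:
    """Print mastery status and percentage-point gain for each content area."""
    for area, (pre, post) in scores.items():
        status = "satisfactory" if post >= MASTERY_CUTOFF else "unsatisfactory"
        improvement = (post - pre) * 100  # percentage-point gain
        print(f"  {area}: {status}, improvement {improvement:+.0f} points")

for name, scores in trainees.items():
    print(name)
    report(scores)
```

Aggregating the same per-worker results by agency would produce the Agency Report view; the unsatisfactory areas drive the recommended remedial assignments.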

Additional reports based on trainee performance are produced for trainers, training supervisors, and state-level supervisors. These include the following:


Trainer reports contain a summary and interpretation of trainee comments. These include item analyses that describe how well trainees performed on each item. Evaluation staff meet with trainers as part of the formative evaluation process to improve deficient items and to determine if instruction should be modified to improve trainee performance.

Training organization reports summarize performance and attitude results for groups of trainees classified by courses completed, demographic characteristics, geographic regions, and agency. Summary reports show cumulative achievement of specific training units and trainees. Staff use this information for formative and summative evaluation, as well as for supervisory purposes.

State-level reports include copies of selected organization reports, as well as cumulative reports for similar training conducted by different organizations across the state. Level 3 (behavior) and level 4 (results) data are collected during follow-ups of trainees and their supervisors. These follow-ups examine whether training helped staff master job-related competencies and whether their performance had improved after training. Follow-up, which focuses on how well training helped caseworkers perform more effectively on their jobs, is an integral component of the model.

Level 4 (results) data are collected during cost-benefit analysis and impact assessment. Cost-benefit analysis is "the process of adding up the benefits, subtracting the costs, and choosing the alternative that maximizes the net benefits" (Gramlich, 1990, p. 2). In other words, how much money was saved by doing it this way? Assessing the amount of money saved by conducting training is a difficult, but essential, component of the system. Unfortunately, the procedures involved in conducting cost-benefit studies are often costly themselves, and program directors are reluctant or unable to reduce training in order to fund such studies. However, if training programs devoted more money to research during the early stages of training, until a judgment could be made regarding program effectiveness or ineffectiveness, better evidence could be assembled.
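A minimal worked sketch of the net-benefit and ROI arithmetic that this definition implies, using hypothetical program figures rather than CDHS data:

```python
# Illustrative cost-benefit arithmetic for competing training alternatives
# (all dollar figures are hypothetical).
alternatives = {
    "Classroom training": {"benefits": 250_000, "costs": 180_000},
    "On-the-job mentoring": {"benefits": 200_000, "costs": 120_000},
}

for name, a in alternatives.items():
    net_benefit = a["benefits"] - a["costs"]
    roi = net_benefit / a["costs"]  # return on investment relative to cost
    print(f"{name}: net benefit ${net_benefit:,}, ROI {roi:.0%}")

# Following the definition above, the preferred alternative maximizes net benefits.
best = max(alternatives,
           key=lambda n: alternatives[n]["benefits"] - alternatives[n]["costs"])
print("Preferred alternative:", best)
```

Net benefit identifies the preferred alternative; ROI expresses the same result relative to cost, the form often reported to funders.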


However, with proper planning it is possible to conduct well-designed cost-benefit research. Several CDHS studies provide models for this type of research. Floss (1990) compared the costs of an innovative HMO program and conventional care for Medicaid clients. A study in Erie County (McCowan, 1991) showed that clients preferred a lower cost managed care program to traditional services provided at the county level.

Impact assessment is the last step in the system. It focuses on transfer of training, which is the effect of training on trainee and agency performance (Broad & Newstrom, 1992), and corresponds to Kirkpatrick's (1994) Level 4 evaluation. Data can be collected using surveys, focus groups, personal interviews, telephone interviews, and site visits, as well as from multiple sources including local agencies, summative evaluations, benefit-cost analyses, and impact analyses. This information is recycled by management staff during the Program Design phase as they plan new training cycles. Impact assessment is often limited by the same constraints which affect cost-benefit studies, namely cost, time, and data collection difficulties, but it is possible to conduct this type of assessment.

CDHS conducts semi-annual follow-ups for the total population of trainees who completed programs during the preceding six-month period using scannable Teleform instruments. In 1997, over 20,000 trainees and their supervisors were involved in these surveys. Each follow-up form contains the trainee's name, the training program completed, and dates of training, in addition to the following three questions rated on a 5-point scale ranging from "strongly disagree" to "strongly agree."

"I learned specific, job-related skills and knowledge."

"Since completing the training, I have used the skills and knowledge on the job."

"This training helped me do my job better."

The survey includes from three to five additional questions based on the specific competencies on which training was based. Trainees indicate whether they used these behaviors effectively as a result of completing training, and their supervisors complete a similar form to rate the trainees on the same competencies. For example, the following items were used for the FosterParentScope: Train the Trainer program:

"Have you used these behaviors effectively as a result of completing the training?"

"Use FosterParentScope materials to develop training programs."

"Develop workshops with consultants using FosterParentScope materials."

Trainees also indicate whether they would change, discontinue, revise, or offer the training in exactly the same format. They also complete the following open-ended questions:

"In what ways did your agency support your use of these behaviors?"

"What obstacles prevented you from using these behaviors?"

"Please make any additional comments you feel are appropriate."

Respondents mail the forms or fax them directly into a database,and CDHS produces reports using Microsoft Access programs.
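A minimal sketch of how such follow-up responses might be aggregated, assuming hypothetical 5-point ratings keyed to the three core questions; the field names and data are illustrative, not the CDHS database schema.

```python
# Illustrative aggregation of follow-up survey ratings
# (1 = strongly disagree ... 5 = strongly agree; hypothetical responses).
from statistics import mean

CORE_QUESTIONS = [
    "Learned job-related skills and knowledge",
    "Used the skills and knowledge on the job",
    "Training helped me do my job better",
]

# Each response: program name plus one rating per core question.
responses = [
    {"program": "FosterParentScope", "ratings": [5, 4, 4]},
    {"program": "FosterParentScope", "ratings": [4, 4, 5]},
    {"program": "Risk Assessment",   "ratings": [5, 3, 4]},
]

by_program = {}
for r in responses:
    by_program.setdefault(r["program"], []).append(r["ratings"])

for program, ratings in by_program.items():
    print(program)
    for i, question in enumerate(CORE_QUESTIONS):
        avg = mean(row[i] for row in ratings)
        agree = sum(row[i] >= 4 for row in ratings) / len(ratings)
        print(f"  {question}: mean {avg:.1f}, percent agree {agree:.0%}")
```

Grouping the same ratings by agency or region would support the regional and local reports described earlier.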

CDHS has completed other impact assessments. McCowan and Wegenast (1989) used a Solomon four-group pretest-posttest design to show that training had a significant, positive effect on the ability of caseworkers to identify and write specific objectives. Other research projects designed by CDHS will be completed in 1998. One study compared the effects of two different spend-down programs on low-income clients in two similar, contiguous counties. Another study used a time-series design to compare past, current, and future employment data to assess the impact of an employment initiative for clients on public assistance. A third study, described earlier, examined the impact of training on the performance of 400 New York City risk assessment workers.


Conclusion

The CDHS training management system is a set of practical, systematic procedures designed to manage and evaluate training programs. It is a complete model that meets the necessary criteria for competency-based training, namely:

Clear job descriptions and program outcomes.

Needs assessments based on job-related competencies.

Structured hierarchy of domains, competencies, and objectives.

Posttest assessment of trainee performance compared with clear criteria.

Remedial training and OJT mentoring to assure trainee mastery of essential material.

The system conforms to the criteria required for a system as described earlier in this paper. It integrates the complex procedures required to develop, deliver, and evaluate competency-based training. It provides a general model for a wide variety of different training and educational programs in the public and private sectors.

The system clearly communicates how its goals are achieved and clarifies the relationships that exist among goals, procedures, and outcomes. It describes how assessment measures are designed and utilized and provides an iterative cycle of development, testing, and redesign. It includes a full array of computer-supported systems to manage and report training data, ranging from the scheduling and planning of training through evaluation and follow-up at the reaction, learning, behavior, and results levels. Finally, the efficacy of the system has been tested and refined over a 20-year period in a large, statewide social services training program. Its procedures could readily be adapted to manage a broad spectrum of training programs in both the public and private sectors.


References

Bertalanffy, L. von. (1968). General system theory. New York: George Braziller.

Broad, M. L., & Newstrom, J. W. (1992). Transfer of training: Action-packed strategies to ensure high payoff from training investments. Reading, MA: Addison-Wesley.

Eisner, E. W. (1969). Instructional and expressive educational objectives: Their formulation and use in curriculum. In W. J. Popham et al. (Eds.), Instructional objectives (AERA Curriculum Evaluation Monograph 3). Chicago: Rand McNally.

Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.

Floss, F. (1990). A return for investment study of Medicaid HMO and private-care services in Monroe County. Buffalo, NY: Center for Development of Human Services. (Xeroxed).

Gagné, R. M., & Briggs, L. J. (1974). Principles of instructional design. New York: Holt, Rinehart and Winston.

Gibson, R. E. (1960). The recognition of systems engineering. In C. D. Flagle, W. H. Huggins, & R. H. Roy (Eds.), Operations research and systems engineering. Baltimore: Johns Hopkins Press.

Gramlich, E. M. (1990). A guide to benefit-cost analysis. Englewood Cliffs, NJ: Prentice-Hall.

Gronlund, N. E. (1970). Stating behavioral objectives for classroom instruction. New York: Macmillan.

Mager, R. (1962). Preparing instructional objectives. Palo Alto, CA: Fearon Press.

McCowan, R. J., McGregor, E., & LoTempio, S. (1983). Project CREST: Criterion-referenced evaluation/social services training. Educational Evaluation and Policy Analysis, 5, 13-17.


McCowan, R. J., McGregor, E., & LoTempio, S. (1984). Competency-based evaluation of social services training. Journal of Continuing Social Work Education, 2, 12-13, 31.

McCowan, R. J. (1990). STRIDE evaluation: A comparison of case management and traditional caseworker services. Buffalo, NY: Center for Development of Human Services.

McCowan, R. J., & Wegenast, D. P. (1989, April). Improving the performance of child protective services workers. Proceedings of the American Educational Research Association, Chicago, IL.

Mislevy, R. J. (1993). Foundations of a new test theory. In N. Frederiksen, R. J. Mislevy, & I. I. Bejar (Eds.), Test theory for a new generation of tests (pp. 19-39). Hillsdale, NJ: Lawrence Erlbaum Associates.

Popham, W. J. (1969). Objectives and instruction. In W. J. Popham et al. (Eds.), Instructional objectives (AERA Curriculum Evaluation Monograph 3). Chicago: Rand McNally.

Popham, W. J. (1990). A twenty-year perspective on educational objectives. In H. J. Walberg & G. D. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 189-200). Oxford: Pergamon Press.

Schlesinger, J. R. (1963). Quantitative analysis and national security. In G. Smale (Ed.), National security management: A commentary on defense management. Washington, D.C.: Industrial College of the Armed Forces.

Silvern, L. C. (1965). Systems engineering of education I: The evolution of systems thinking in education. Los Angeles: Education and Training Consultants.

Smith, M. S., & O'Day, J. (1991). Systemic school reform. In S. H. Fuhrman & B. Malen (Eds.), The politics of curriculum and testing (pp. 233-267). London: Falmer Press.

Wildavsky, A. (1964). The politics of the budgetary process. Boston: Little, Brown.
