
FOLIOz: Facilitated Online Learning as an Interactive Opportunity in Australia


 

Evaluating Information Skills Training Courses (InfoSkills2)  

Briefing September 2008

 

Evaluating training provision

Introduction

Evaluation of training provision works at a number of levels. First of all, you will want to ask yourself about the effectiveness of your own training delivery (Webb & Powis, 2004). The easiest method for this is self-evaluation: thinking about what has worked well and what has not worked so well. You will find it helpful to use a formal tool or prompt sheet to give your self-evaluation a structure. The ‘Self-Evaluation Questions’ review form from the NLH DLNet is an ideal way of reviewing your own training. The form is available from http://foliozinfoskills2.pbwiki.com/f/self-review.doc.

Of course, in the heat of a training situation it is very difficult to stand back mentally and make an objective assessment of your own performance. You may therefore find it helpful to employ some form of recording technology on occasion: either an audio recording, which captures something of the group dynamics and interactions, or a video recording, which also lets you assess the body language and physical positioning of both yourself and the participants. You can then review your performance yourself (reflection-on-action) or review the recording with a colleague or mentor. This method is typically used by general practitioners when reviewing their consultation style.

This brings us to a third popular method, peer review of teaching, which universities use to evaluate lecturers. The review can be conducted by an immediate colleague or by someone else whose opinion you respect. As with self-evaluation, it is useful to have a formal structure or tool, so that expectations of the evaluation are shared by the assessor and the person being assessed, and so that reviews conducted at different stages in your development are comparable.

The wider context for evaluation

As above, most activities on the InfoSkills2 course have worked from the twin assumptions (i) that training is a good thing and (ii) that training is therefore worth doing. Few argue with the first premise: it is, after all, a fundamental principle of virtually all education, continuing professional development and lifelong learning. Economists, however, remind us of the concept of the “opportunity cost”: the cost, not simply in monetary or even tangible terms, of choosing to do one activity at the expense of missing the opportunity to pursue another (http://www.bristol.ac.uk/fec/glossary.html). So while you and your colleagues are spending your time training your users, what else might you be doing instead? Indeed, what else might they be doing?

Whether training is worth doing is more problematic. Here we refer not simply to costs, in terms of income and expenditure, but also to benefits. Benefits may be considered from a variety of perspectives and may or may not be easily quantifiable. Above all, they are inextricably tied up with values: two different organisations, or even two managers within the same organisation, presented with an identical costing proposal may arrive at different verdicts depending on their personal or organisational values.

Level Four of Kirkpatrick’s levels (mentioned in a previous briefing) attempts to look at the business results that accrue from training; Kirkpatrick describes this as “the most important step and perhaps the most difficult of all” (Kirkpatrick, 1996). Evaluation at this level may well require more sophisticated approaches, such as use of a control group, a suitable outcome measurement period, repeated measures (including, but not exclusively, a before-and-after approach) and a holistic view of both costs and benefits.

Approaches to evaluating training provision may differ in how they handle costs (what is included and how it is measured), how they handle benefits (again, what is included and how it is measured), and whether they focus on one side of the “equation” or on both. Evaluation of training provision therefore needs to take into account both the framework being used (we consider the Kirkpatrick and CIRO models below) and the costing model that underpins the evaluation, together with any assumptions that go with it (e.g. seeking a return on investment, breaking even, or generating a modest or even a commercially significant operating profit). The benefits of training need to be realised in monetary terms (Tamkin, 2005), whether they come from increased productivity, more motivated employees or a reduction in staff turnover. Critically, we have to determine the period over which these benefits are realised.
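To make Level Four concrete, here is a minimal sketch (in Python, with entirely invented figures) of the kind of calculation it implies: the performance change in a trained group is compared with that of a control group over the same before-and-after period, and the net change is then translated into a monetary benefit for a chosen measurement period. The variable names, performance measure and conversion rate are illustrative assumptions, not part of the Kirkpatrick model itself.

# Hypothetical Level Four sketch: estimate the performance change attributable
# to training by comparing a trained group with a control group, before and
# after the course. All figures are invented for illustration.

pre_trained, post_trained = 62.0, 74.0    # e.g. mean searches completed per month
pre_control, post_control = 61.0, 64.0    # control group measured over the same period

# Change in the trained group, net of the change that happened anyway
net_change = (post_trained - pre_trained) - (post_control - pre_control)

# Translate the net change into a monetary benefit over the measurement period
value_per_unit = 15.0        # assumed value (in pounds) of one extra unit of performance
measurement_months = 12      # period over which benefits are counted
benefit = net_change * value_per_unit * measurement_months

print(f"Net performance change: {net_change:.1f} units")
print(f"Estimated benefit over the period: £{benefit:,.2f}")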

The CIRO approach to evaluation

The CIRO approach to evaluation covers all four aspects of the training cycle (YourPeopleManager.com, 2004):

C – Context, or the environment within which the training took place
I – Inputs to the training event
R – Reactions to the training event
O – Outcomes

Context - Evaluation here goes back to the reasons for the training event. How was the training opportunity specified? How were training needs identified and analysed?


Inputs - Evaluation here looks at the planning and design processes that led to the selection of trainers, programmes, employees and materials. Determining whether the inputs were appropriate to the training event is critical to its success. Suppose, for example, that your beginners’ literature searching course attracted all the proficient database searchers; this would constitute a waste of both time and money.

Reactions - Evaluation methods here should be appropriate to the nature of the training undertaken. How did the learners react to the training and was the training course relevant to their roles? Assessment might also look at the content and presentation of the training event to evaluate its quality.

Outcomes - To what extent has the learning been transferred to the workplace? This is easier where the training is concerned with hard and specific skills, but more challenging where less quantifiable competencies, including behavioural skills, are involved. If performance is expected to change as a result of training, then the evaluation needs to establish the initial performance level of the learner.

This evaluation framework, evaluating the context, inputs, reactions and outcomes to training and development, will be complemented by approaches to costs as mentioned below in order to produce a cost/benefit analysis.

Measuring Costs

Although measuring costs seems relatively straightforward, the challenge is to decide what exactly to include or exclude (Tamkin, 2005). Consider the following five categories of cost incurred in developing and delivering a single training programme (Phillips, 2003):

1. Needs assessment. Salaries and employee benefits of the training staff involved along with meals, travel, incidental expenses, supplies and expenses, printing, equipment, and other miscellaneous items.

2. Programme development. Again these include the salaries, benefits and other expenses listed above, plus the acquisition cost when a packaged programme is purchased from a supplier. Needs assessment and development costs are both pro-rated over the life of the programme until it is revised.

3. Delivery. The costs accounted for when the programme is evaluated include participant costs, instructor costs, meals, travel, lodging, programme materials and supplies, facility costs and other miscellaneous costs.


4. Evaluation. Evaluation is frequently omitted from the cost calculation. However, if evaluation is part of the process (even a Level 1 end-of-course questionnaire), its cost is part of the investment. When a very comprehensive evaluation takes place, pro-rating its cost may be feasible; however, many evaluations, including ROI impact studies, are expensed to a particular course or programme.

5. Training overhead. While this cost is usually small on a per-programme basis, it is important to account for the people who support the process, including anyone connected with the training function. This cost is usually found in the general training budget and should be pro-rated across all programmes.

Ask yourself: when costing a course, do I routinely consider the time spent establishing the need for the course (Item 1), the costs of evaluating the course, such as designing and analysing an evaluation questionnaire (Item 4), or the costs of the person who compiles the list of participants or sends out emails and venue instructions (Item 5)? Although you no doubt include the cost of your own time as a trainer, we would be very surprised if you included the cost of the participants’ time. Consider that if you train six managers who would ordinarily earn £500 for a morning’s work, you are already £3,000 down before you switch on a light or open your mouth. No pressure there then!
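As a worked illustration of the point, the short Python sketch below totals a course’s cost across the five categories above and then adds the participants’ time, using the £500-per-morning figure from the paragraph; every other figure is invented purely for illustration.

# Illustrative costing of a single half-day course using the five categories above.
# Only the £500 participant figure comes from the text; the rest is invented.

provider_costs = {
    "needs_assessment": 300.0,        # pro-rated share of establishing the need
    "programme_development": 800.0,   # pro-rated over the expected life of the course
    "delivery": 650.0,                # trainer time, materials, room
    "evaluation": 150.0,              # designing and analysing the questionnaire
    "training_overhead": 100.0,       # pro-rated administrative support
}

participants = 6
cost_per_participant_morning = 500.0  # earnings forgone per participant

participant_time = participants * cost_per_participant_morning
total_cost = sum(provider_costs.values()) + participant_time

print(f"Provider-side cost:   £{sum(provider_costs.values()):,.2f}")
print(f"Participants' time:   £{participant_time:,.2f}")
print(f"Full cost of course:  £{total_cost:,.2f}")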

Return on Investment

Phillips (1997) extends Kirkpatrick’s framework further, creating Learning Measurement (Level 5): the need, over time, to measure the financial impact of training, that is, the return on investment. This again requires the sophisticated evaluation of Level 4, but with the specific additional need to determine the direct costs of the training, to measure productivity or performance before and after training, to calculate the resulting increase, to translate that increase into a dollar (or sterling) value benefit, and to subtract the cost of training from that benefit, thus calculating the Return on Investment (ROI).

Return on Investment = (cost savings – cost of training or coaching) / cost of training
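Expressed as a short calculation, and using invented figures purely for illustration, the formula works as in the Python sketch below.

# Phillips-style ROI calculation (figures invented for illustration)
cost_savings = 4500.0      # monetary value of benefits attributed to the training
cost_of_training = 3000.0  # full cost of the training or coaching

roi = (cost_savings - cost_of_training) / cost_of_training
print(f"Return on investment: {roi:.2f} (i.e. {roi:.0%})")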

While it is not always possible to gather “hard data” for each item of benefit, it should be possible, having identified areas of potential benefit, to ask key stakeholders to assign them a cash value. This is analogous to the “willingness to pay” methods used to assess patients’ values for health treatments. Rather than taking a perfectionist view about what can or cannot be assessed, it is best to take a more pragmatic view, as all cost/benefit analyses are “littered with assumptions of some sort” (Bee, 1994).


An illustrated summary of the Phillips methodology is available at http://www.trainingreference.co.uk/training_roi/roi3graphic.htm. Such a comprehensive approach begins with planning the project (an “Impact Study”) and moves into the collection, analysis and reporting of data, making it possible to measure the five Kirkpatrick/Phillips levels of learning. The following key performance components thus make up the Balanced Scorecard approach (Kaplan & Norton, 1996):

Level 1 – Satisfaction
Level 2 – Learning Effectiveness
Level 3 – Job Impact (Time to Job Impact; Barriers to Use; Post-Training Support)
Level 4 – Business Results (Job Performance Change; Business Drivers Impacted by Training)
Level 5 – Return on Investment

Benchmarking

You will find it helpful to benchmark your training activities (Bramley, 1996) when deciding whether you are offering value for money. However, as the section above illustrates, there is considerable variation in what trainers include in their costs. Generally speaking, you will find it most useful to itemise the individual components of the training, not just at the level of the five headings given above but at an even more specific level, to allow true comparability. Internal benchmarking can also be used to compare the effectiveness of different methods used to deliver training on the same topic. Appropriate metrics include “cost per participant”, “cost per course”, “cost per participant hour”, “percentage uptake within the whole organisation or for different staff groups” and so on.
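As a sketch of how these metrics might be calculated for a single course, with all figures invented for illustration:

# Benchmarking metrics for one course (illustrative figures only)
total_course_cost = 1250.0   # fully itemised cost of the course
participants = 10
course_hours = 3.0
eligible_staff = 120         # staff group to whom the course was offered

cost_per_course = total_course_cost
cost_per_participant = total_course_cost / participants
cost_per_participant_hour = cost_per_participant / course_hours
uptake = participants / eligible_staff

print(f"Cost per course:            £{cost_per_course:,.2f}")
print(f"Cost per participant:       £{cost_per_participant:,.2f}")
print(f"Cost per participant hour:  £{cost_per_participant_hour:,.2f}")
print(f"Uptake within organisation: {uptake:.1%}")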

An Illustrative Example

At the Evidence Based Librarianship Conference in Brisbane, Alison Brettle and colleagues (2005) attempted to analyse the cost to libraries of different types of training. One-to-one pre-arranged training worked out at £19.92 per session, one-to-one ad hoc training at £13.47 per session and small group training at £32.33 per session. Although one-to-one ad hoc training was estimated as the least costly option per session, health professionals typically attend such sessions 2.67 times more frequently than other types of training, increasing the overall cost of ad hoc training. Here it is important to specify whether the preferred cost measure is per person or per session. For example, while small group training is the most costly to the library per session (the time spent on this type of session is longer, and sessions are typically delivered by librarians on a higher salary than those who deliver one-to-one ad hoc training), the cost per person per session is lower than for one-to-one pre-arranged training. With this cost data to hand it becomes possible to examine the reported benefits of each type of session and thus to evaluate training provision on a cost-benefit basis.
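The arithmetic of that comparison can be reproduced as below. The per-session costs and the 2.67 attendance multiplier come from the example; the small-group size used to derive a per-person figure is an assumed value, since group sizes are not quoted here.

# Comparing training formats on cost per session and cost per person.
# Per-session costs and the 2.67 multiplier come from the Brettle et al. example;
# the group size of 6 is an assumption made purely for illustration.

cost_per_session = {
    "one-to-one pre-arranged": 19.92,
    "one-to-one ad hoc": 13.47,
    "small group": 32.33,
}

# Ad hoc sessions are attended about 2.67 times more often, so the cost per
# learner over a comparable period is correspondingly higher.
ad_hoc_per_learner = cost_per_session["one-to-one ad hoc"] * 2.67

assumed_group_size = 6
small_group_per_person = cost_per_session["small group"] / assumed_group_size

print(f"Ad hoc cost per learner over a comparable period: £{ad_hoc_per_learner:.2f}")
print(f"Small group cost per person per session: £{small_group_per_person:.2f}")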


Conclusion

Training evaluation typically captures participants’ satisfaction with the training received but does not identify how individual and corporate performance has improved as a result (Metropolitan Police, 2003). Training should not be commissioned without consideration of its impact. On the other hand, an inappropriate choice of cost measures or an unsuitable evaluation framework has the potential to do more harm than good (Tamkin, 2005). Evaluation of training provision is frequently either done badly or not done at all. Calculations of cost versus benefit depend on a host of assumptions, which are frequently lost in zealous attempts to oversimplify the message for wider consumption. The techniques briefly described above command their own place in the training evaluation toolkit, but their selection should also demonstrate an in-depth awareness of the characteristics and complexity of the training itself, the host organisation and its systems and processes. Caveat evaluator!

References:

Bee F, Bee R (1994). Training Needs Analysis and Evaluation. IPD.

Bramley, P (1996) Evaluating Training Effectiveness: Benchmarking Your Training Activity Against Best Practice. McGraw-Hill: London

Brettle, A (2007) Evaluating information skills training in health libraries: a systematic review. Health Information and Libraries Journal, (24)1:18-37.

Brettle A, Hulme C, & Ormandy P. (2005) Evidence to support strategic decision making for health care information services: the Effective Methods of Providing InfoRmation for patIent Care (EMPIRIC) project. http://conferences.alia.org.au/ebl2005/Brettle.pdf

Kaplan R S, Norton D (1996), The Balanced Scorecard, Harvard Business Press.

Kirkpatrick, D.L. (1996). Evaluating Training Programs: The four levels, Berrett-Koehler.

Metropolitan Police (2003). Best Value Review of Training. http://www.met.police.uk/foi/pdfs/how_are_we_doing/archive/2003/best_value_review_of_training.pdf

Mitterer S (2003). Does Training Work? Extensor Ltd. http://www.extensor.co.uk/articles/does_training_work/does_training_work.pdf

Phillips JJ (1997). Return on Investment in Training and Performance Improvement Programs, Houston: Gulf Publishing.

Webb J & Powis C (2004). Chapter 8: Feedback and evaluation. In: Teaching Information Skills: Theory and Practice. London: Facet Publishing.


Further resources and reading:

The Outcomes Toolkit 2.0 sets out a four-step process for conducting outcome-based evaluation. http://ibec.ischool.washington.edu/static/ibeccat.aspx@subcat=outcome%20toolkit&cat=tools%20and%20resources.htm

Donald Kirkpatrick's Learning Evaluation Model (review, material, design and code by Alan Chapman, 1995-2005): http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm. Also includes a free Excel training evaluation form.

Business Link: Evaluate your Training http://www.businesslink.gov.uk/bdotg/action/layer?topicId=1074424250

Conner, ML. (2002) How do I measure return on investment (ROI) for my learning program? Training & Learning FAQs. Learnativity.com. April 5, 2002. http://learnativity.com/roi-learning.html

Kearns P, Miller T (1997). Measuring the Impact of Training and Development on the Bottom Line. FT Management Briefings. London: Pitman Publishing.

Newby AC (1992). Training Evaluation Handbook. Gower.

‘Training the Trainer’ resource pack from the International Council on Archives. You can get some ideas about what questions to ask yourself about your training from the ‘Trainer Evaluation’ section of Chapter 9 of the resource pack at http://www.ica-sae.org/trainer/english/p19.htm