
Creating An Architecture of Assessment: using benchmarks

to measure library instruction progress and success

Candice Benjes-Small

Eric Ackermann

Radford University

“So, Candice, how many library sessions have we taught this year?”

[Chart: number of BI sessions taught per year, 2001–2006; y-axis 0–400]

Look at all these instruction librarians!

But…

• Curricular changes

• Librarian burnout

• Students reported BI overload

On the other hand

• University administration wants to see progress

Looking for alternatives

• Number of sessions plateau

• Scoured literature

• Attended conferences

• Networked with colleagues

Our environment

• Public university

• 9000+ students

• Courses not sequenced

• Instruction built on one-shots

Macro look at program

• Focus on us, not students

• Search for improvements over time

• Student evaluations as basis

A little bit about our evaluation form

Goals

• Provide data to satisfy three constituents

– Instruction librarians: immediate feedback

– Instruction team leader: annual evaluations

– Library Admin: justify instruction program

Background

• Began in 2005

• Iterative process

Development

• 4-point Likert scale

• Originally had a comment box at end

• Major concern: linking comments to scale responses

Solution: Linked score and comment responses

• Q1. I learned something useful from this workshop.

• Q2. I think this librarian was a good teacher.
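The linked design above pairs each Likert score with its own comment box. A minimal sketch of that record structure (a hypothetical illustration, not the actual form software; names are our own):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkedResponse:
    """One evaluation item: a Likert score tied directly to its comment."""
    question: str                  # e.g. Q2 from the form
    score: int                     # 4-point Likert scale (1 = low ... 4 = high)
    comment: Optional[str] = None  # free text linked to this specific score

# Example: a student rates Q2 and explains that rating in the same item,
# so the comment can never drift away from the score it refers to.
r = LinkedResponse(
    question="I think this librarian was a good teacher.",
    score=3,
    comment="Clear examples, but the pace was a bit fast.",
)
```

Because the comment lives on the same record as the score, analysis can filter comments by the rating they accompany.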

Inspiration for benchmarks

• University of Virginia library system's use of metrics to determine success

• Targets outlined in advance

• We would apply it to one department rather than the entire library

To learn more about UVA’s efforts, visit http://www.lib.virginia.edu/bsc/

Benchmark baby steps

• Look at just one small part of instruction program

• Begin with a single benchmark

• Identify one area to assess

• Decided to do one particular class

Introduction to Psychology

• Taught fall and spring, beginning 2006

• 14 sections of 60+ students

• Shared script and PPT

• Everyone teaches over 2 days

To see our shared PPT, visit http://lib.radford.edu/instruction/intropsych.ppt

Developing benchmarks

• Selected a comment-based metric for the Instruction Team

• Chose class of comments: “What did you dislike about the teaching?” (Question #2)

Current benchmarks

• Partial success: 5% to < 10% of total comments for Question 2 are negative

• Total success: < 5% total comments for Question 2 are negative
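The two thresholds above amount to a simple percentage check. A sketch of that logic (function name and return labels are our own illustration):

```python
def benchmark_status(total_comments: int, negative_comments: int) -> str:
    """Classify Question 2 results against the two benchmark thresholds:
    total success   -> negative comments are under 5% of all comments
    partial success -> negative comments are 5% to under 10%
    """
    if total_comments == 0:
        return "no data"
    pct_negative = 100 * negative_comments / total_comments
    if pct_negative < 5:
        return "total success"
    if pct_negative < 10:
        return "partial success"
    return "benchmark not met"

# e.g. 6 negative comments out of 80 total is 7.5% -> "partial success"
```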

How did we do?

Results

Success?

• Reached our benchmark for partial success; never quite went below 5%

• Tweaking the script again

• Continuous improvement

Scaling for your program

• Adjust the benchmark levels

• Only look at score responses (quantitative) instead of comments (qualitative)

• Adjust the number of benchmarks used

Sharing with administrators

• Team annual reports

• Stress evidence-based nature

• Use percentages, not a 4-point scale
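Translating 4-point Likert scores into a percentage for administrators can be done by reporting the share of responses at the top of the scale. A minimal sketch, assuming "agreement" means a score of 3 or 4 (that cutoff is our assumption, not stated on the slide):

```python
def percent_agreement(scores: list[int]) -> float:
    """Share of 4-point Likert responses rating 3 or 4,
    reported as a percentage for administrative summaries."""
    if not scores:
        return 0.0
    agree = sum(1 for s in scores if s >= 3)  # assumed agreement cutoff
    return 100 * agree / len(scores)

# e.g. scores [4, 4, 3, 2, 4] -> 80.0 (% agreement)
```

A figure like "80% agreement" is easier for administrators to read than a mean of 3.4 on a 4-point scale.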

Disadvantages

• Time intensive

• Follow through required

• Evaluation forms not easy to change

More disadvantages

• Labor intensive to analyze comments

• Results may reveal your failures

Advantages

• Flexibility to measure what you want to know

• Provides structured goal

• Evidence-based results more convincing

More advantages

• Continuous evaluation results over time

• Data-driven decisions about instruction program

• Do-able

Contact

Candice Benjes-Small

[email protected]

Eric Ackermann

[email protected]