
Performance Measurement in Public Works

A nuts and bolts guide for public works professionals

William B. Cook


©APWA, August 2000
American Public Works Association
2345 Grand Boulevard, Suite 500
Kansas City, MO 64108-2641
www.apwa.net

Printed in Kansas City, MO

ISBN: 0-917084-87-X


CONTENTS

PREFACE

INTRODUCTION
What is performance measurement?
Why is performance measurement important?
What’s the history of performance measurement?
What does it take to develop and implement a good performance measurement system?

CHAPTER 1
How do we know if we are ready for performance measurement?
Doing a readiness survey
Step One – Answer the tough questions
Step Two – Determine your basic level of readiness
Step Three – Checklist your current system
Step Four – Discuss it as a group
Now what?

CHAPTER 2
Where are we headed?
Checking in with your mission, vision and values
Guidance from the Public Works Management Practices Manual
Why is strategic direction important?
Real mission, vision, values definitions and examples
Two basic approaches
The whole system approach
Developing or updating your mission, vision, values in one fine day
Do we need a facilitator?
The principle of alignment

CHAPTER 3
What approach should we use?
In what context do we use performance measurement?
Accountability
Balanced Scorecard
Baldrige National Quality Program
Benchmarking
Best Practices Review
Canada Awards for Excellence Program
Comparative Performance Measurement
GASB Statement No. 34
Managing for Results
Management Practices Self Assessment and Accreditation
Performance Budgeting
Service Efforts and Accomplishment Reporting (SEA)
Strategic Planning
Total Quality Management
And there’s more


CHAPTER 4
Organizing the performance measurement effort
Appoint a performance measurement program manager
Start with a performance measurement team and work plan
Identifying programs to measure
Developing a program mission statement
Identifying program performance measures
Setting targets for accomplishment
Training program leaders
Developing a communication plan

CHAPTER 5
Mapping out current reporting processes
Why is mapping important?
Using flowcharts
Exploring possible improvements

CHAPTER 6
Getting buy-in
Recognizing the major obstacles
Addressing all the key questions
Connecting with your employees, labor groups, managers and elected officials
Committing to full disclosure
Communicating directly with employees

CHAPTER 7
What are the needs of the community?
How do we find out what the needs of our community are?
Customer councils
Customer interviews
Focus groups
Telephone hotlines
Town meetings
Surveys

CHAPTER 8
Getting clear on performance measurement definitions
Many terms to consider
Fully defining each performance measure
Performance measurement definition options

CHAPTER 9
Focusing on outcomes
Don’t expect everyone to get it right away
Some things really can’t be measured!
Categories of information
Taking it to the next level – the “So what?”


Types of outcomes
What are the consequences?

CHAPTER 10
Evaluating performance measures
Criteria for a good set of performance measures
Characteristics performance measures should possess
The realities of performance measurement
Strategy alternatives
Limitations of performance information
Overcoming the limitations of performance information

CHAPTER 11
Gathering data
Keeping a handle on data collection
Different levels of information for different needs
Transforming data into information
Sources of data
Factors to consider when collecting data
The data collection process
Monitoring targets
Using trained observers

CHAPTER 12
Developing performance information that has real value
Reporting and using performance information
Who should receive the performance information when?
How can the performance information be used?
Performance information presentation and reporting

CHAPTER 13
Linking your performance measures to the budget process, performance appraisals, and pay
Linking to the budget
What affects people’s performance?
Linking to performance appraisals and pay

CHAPTER 14
Using performance measurement information to manage
Can performance measurement really help me manage?
Asking the right questions and doing something with the results

CHAPTER 15
Suggestions for improving performance information usefulness
Address service quality and outcomes explicitly when reviewing services and programs


Ask program managers to set a target for each performance measure and assess progress
Include indicators of both “intermediate” outcomes and “end” outcomes
Include performance measurement in your training programs
Incorporate outcome performance requirements into contracts wherever feasible

CHAPTER 16
Is there a bottom line?
What is measured and reported gets attention!
Measuring program outcomes and quality is not easy
Is there an answer?

APPENDIX A
Sources

APPENDIX B
Best Practices in Performance Measurement Model

APPENDIX C
Mission, Vision, Values, Goals
One-day workshop agenda
What facilitation skills do I need?

APPENDIX D
Surveys
Why survey?
The American Customer Satisfaction Index
What do you want to achieve with your survey?
Types of surveys
What resources are required?
What needs to be considered when writing survey questions?
Rules for constructing a survey
Survey design
Pretesting

APPENDIX E
Glossary


PREFACE

I have attempted to eliminate much of the glitz and glitter (if you can call it that) surrounding performance measurement and get to the point. That’s why the format of the book is mostly headlines, lists and bullets. My hope is that this information can be used as a practical guideline from a public works perspective. At the same time, I have attempted to provide all of the important tools needed to embark on a performance measurement effort.

As the performance measurement project manager for Snohomish County, WA (24 departments with 2,600 employees), I have personally experienced the full breadth of challenges and rewards that come from such an effort. One thing is for sure, it is no small matter to commit to a performance management and measurement system.

After writing Public Works Performance Management: Creating Accountability and Increasing Performance for APWA in 1999, it was clear that a more detailed “how to” guide would be needed for performance measurement. This book is meant to be that guide. This is a place to start. It should also be a valuable reference manual as you proceed.

I wish you luck as you begin this process. You will need some real staying power and a firm grasp of why you are doing this. Performance measurement is not for the faint of heart.

About the Author

Bill Cook is the Executive Office Administrator for the Snohomish County Executive in Everett, Washington. He is a past member of the APWA Management Practices Advisory Committee, APWA Accreditation Council, and APWA Leadership and Management Committee. He was a self assessment clinic instructor, has assisted many agencies around the country with self assessment, and led the self assessment process for Snohomish County’s public works department. Organizational development is one of his many responsibilities as Executive Office Administrator. In this capacity he has developed and implemented a successful performance management system for the county.


INTRODUCTION

“The significant problems we face cannot be solved by the same level of thinking that created them.” - Albert Einstein

Editor’s Note: Superscripted footnote numbers throughout text refer to publications listed in Appendix A - Sources

What is performance measurement?

When most of us think of performance measurement, we think of big formulas and a relentless stream of potentially useless information. For performance measurement to be useful in our daily work and meaningful to our citizens, it cannot be overly complex. Therein lies one of our challenges – deciding what to measure, how to connect it to the real world of our work, and not get lost in volumes of data and paper.

There is no universally accepted term for measuring an organization’s performance. As a result, many terms such as productivity, work measurement, and effectiveness have been used synonymously with “performance measurement.”

Performance measurement in its simplest form is an assessment of how well an agency provides services. From a slightly different perspective, it is the regular gathering of specific information regarding the results of the services we offer. After gathering and analyzing this information, you should be able to answer these questions:

• What are the citizens getting for their money?

• What was achieved?

• How was the quality of the service?

• How were the lives of our citizens improved by this effort?

• How efficiently was the work done?

Managers of any sports team need to know the running score so they can assess whether changes are needed for the team to win. Public works directors, managers, engineers and supervisors need similar information. Defining what a “win” is can be tricky for most government agencies.

For private sector businesses the running score is profit and market share. That’s their main thing. Cost alone doesn’t mean much. Measuring that running score and using it for better performance is what performance measurement is all about.¹

So what’s the main thing for a public works agency? Why does the agency exist? What do the citizens expect?

The generic answer for the main thing question is – Public works agencies exist to build and maintain permanent engineering works or improvements with public and private money and provide services that support these works.


In Public Works Performance Management, performance measurement was defined as:

per-form-ance meas-ure-ment:

A process of assessing progress toward achieving predetermined goals, including the efficiency with which resources are transformed into goods and services (outputs), the quality of those outputs and outcomes, and the effectiveness in terms of their specific contributions to program goals.

This is an accurate definition, but it is too complex. There are too many elements in the definition to make it meaningful. Creating a definition that is useful and meaningful is a real challenge. The true bottom line from a citizen’s perspective is outcomes – real results. Outcomes address the all important question of “So what?”

If we overlay x miles per year, sweep x miles per month, or process x tons of waste per year, So what? Big deal! What performance measure would answer this question and is it meaningful to all of our employees, elected officials and citizens? Based on this perspective, here’s a simplified definition:

Performance Measurement: An assessment of how an agency performs at providing services.
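To make the distinction between outputs, efficiency, and outcomes concrete, here is a minimal sketch in Python. The program, figures, and measure names are hypothetical examples invented for illustration, not drawn from any agency’s data; the point is simply that the outcome line, not the output line, answers the “So what?” question.

```python
# Minimal sketch (hypothetical figures) of the three kinds of measures
# named in the definitions above, for an imaginary street-resurfacing program.

lane_miles_resurfaced = 120            # output: what was produced
program_cost_dollars = 3_600_000       # input: resources consumed
streets_rated_good_or_better = 0.82    # outcome: the condition citizens actually experience

# Efficiency: the rate at which resources are transformed into outputs.
cost_per_lane_mile = program_cost_dollars / lane_miles_resurfaced

print(f"Output:     {lane_miles_resurfaced} lane miles resurfaced")
print(f"Efficiency: ${cost_per_lane_mile:,.0f} per lane mile")
print(f"Outcome:    {streets_rated_good_or_better:.0%} of streets rated good or better by citizens")
```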

Why is performance measurement important?

The most powerful reason for measuring performance is that citizens are continually demanding more responsive and competitive government. Our citizens’ expectations are high and the demands continue to grow as their knowledge of the agency grows. We need to be able to respond in a way that is believable, realistic, and informative. We need to be able to prove that we are good stewards of public funds in order to build and sustain trust.

In addition, the Governmental Accounting Standards Board (GASB) has adopted a resolution strongly encouraging local governments to adopt annual Service Efforts and Accomplishments (SEA) reporting. This type of reporting is a standardized performance report that provides a means for comparing government performance over time and against other jurisdictions. The standards for SEA reporting have been under development for several years and many governmental financial experts anticipate that SEA reporting, or some form of it, may be required in the future.⁷

There are a number of benefits that come as a result of measuring performance, including:

• Strengthened accountability

• Enhanced decision-making

• Improved customer service

• Improved ability to determine effective resource use

• Support for strategic planning and goal setting


Why measure performance? It is essential because …

• If you don’t measure results, you can’t tell success from failure.

• If you can’t see success, you can’t reward it.

• If you can’t reward success, you’re probably rewarding failure.

• If you can’t see success, you can’t learn from it.

• If you can’t recognize failure, you can’t correct it.

• If you can demonstrate results, you can win public support. - Reinventing Government

What’s the history of performance measurement?

The need to measure government performance was recognized as far back as the early 1900s by New York City’s Bureau of Municipal Research soon after the city adopted a formal budgeting system. In 1943, a guide for measuring performance was written by Clarence Ridley and future Nobel laureate Herbert Simon and published by ICMA. Measuring Municipal Activities: A Survey of Suggested Criteria for Appraising Administration suggested types of information that local governments might use to monitor and assess how well services are being delivered.

It was not long before the federal government turned its attention to performance measurement with the 1949 report of the Commission on Organization of the Executive Branch of the Government, also referred to as the Hoover Commission. This and a second Hoover Commission worked successfully in the years following World War II to streamline and reorganize the federal government, which had grown much larger and more complex in response to the Depression and World War II. These commissions recommended that agencies measure performance and use the information in budgeting, among other functions.

In his work with Japan through the 1950s, “quality guru” W. Edwards Deming pitched statistics as the basic means of finding out what any system can do and then designing improvements, as indicated, to help the system become more productive.

Although discussion of performance measurement has waxed and waned over time, its use has been an essential ingredient in several budget and management reform movements of the modern era, including planning-programming-budgeting systems in the 1960s, zero-based budgeting in the 1970s, and management by objectives in the 1980s.

State and local governments, along with organizations such as the Urban Institute and ICMA, have continued to research performance measurement steadily. In the mid-1980s, the Governmental Accounting Standards Board (GASB) began a research project into performance measurement that resulted in a series of reports published under the title Service Efforts and Accomplishments Reporting: Its Time Has Come (1989-1993). GASB concluded that performance measurement and reporting is rapidly developing and is of value to elected officials, citizens, and management. It also recognized that there is need for further work on developing valid and generally accepted indicators, gathering data, and developing methods to verify and present the information.¹

In the 1990s, national associations such as GASB, the National Academy of Public Administration, and the American Society for Public Administration passed resolutions calling for the public sector to use performance measurement and reporting systems. Several states have enacted financial performance reporting standards for state agencies. One reason for the push is the Government Performance and Results Act of 1993. This law mandates creation and support of inspectors general and chief financial officers to fight waste in selected federal agencies and to improve accountability for financial and general management. Strategic plans must be set, performance goals established, and an annual report filed with Congress on actual performance as compared with goals. Some federal agencies now must show results before new appropriations are made. All of these changes were made so that government could manage for results, not just cite rules and regulations as a defense against action.

The attention of many performance measurement efforts has turned to outcome-based management. This approach has been developed and utilized by some U.S. cities over the past two decades. In addition, some states are beginning to use it. Foreign countries such as Great Britain, Australia, and New Zealand are well on their way.

Sunnyvale, California, a city of 120,000 in the heart of the Silicon Valley, began the experiment 20 years ago. In each policy area, the city defined sets of “goals,” “community condition indicators,” “objectives,” and “performance indicators.” In a normal political process, most decision-makers never spend much time talking about the results they want from the money they spend. With this system, for the first time they understand what the money is actually buying, and they can say yes or no.

At least a half dozen states hope to follow in Sunnyvale’s footsteps. Oregon has gone the farthest. In the late 1980s, Governor Neil Goldschmidt developed long term goals, with significant citizen input. He set up the Oregon Progress Board, comprised of public and private leaders, to manage the process. The Board developed goals and benchmarks through 12 statewide meetings and written materials from over 200 groups and organizations. “Oregon,” the Board stated, “will have the best chance of achieving an attractive future if Oregonians agree clearly on where we want to go and then join together to accomplish those goals.”

The legislature approved the Board’s recommended 160 benchmarks, measuring how Oregon is performing on three general goals: exceptional individuals, outstanding quality of life, and a diverse, robust economy. Seventeen measures are deemed short-term “lead” benchmarks, related to urgent problems on which the board seeks progress within five years. They include reducing the teen pregnancy rate, enrolling people in vocational programs, expanding access to basic health care, and cutting worker compensation costs.

Another 13 benchmarks are listed as “key” – fundamental, enduring measures of Oregon’s vitality and health. These include improving basic student skills, reducing the crime rate, and raising Oregon’s per capita income as a percentage of the U.S. average.

Governor Barbara Roberts translated the broad goals and benchmarks into specific objectives for each agency. Objectives were integrated into the budget, giving Oregon the first performance-based budget among the states.

Government agencies, professional associations, think tanks such as the Urban Institute, and researchers in universities have suggested many standards of performance measurement. In an era when “do more with less” has become a common adage directed at all levels of government, performance measurement is becoming an essential tool for addressing questions of improvement in terms of outcomes, results, and accountability.


What does it take to develop and implement a good performance measurement system?

As reported in Serving the American Public: Best Practices in Performance Measurement by the National Performance Review, the following findings were made to help government agencies develop and implement a performance measurement system:

Leadership is critical in designing and deploying effective performance measurement and management systems.

Clear, consistent, and visible involvement by senior executives and managers is a necessary part of successful performance measurement and management systems. Senior leadership should be actively involved in both the creation and implementation of its organization’s systems. In several public and private organizations studied, the chief executive officer not only personally articulated the mission, vision, and goals to various levels within the organization, but was also involved in the dissemination of both performance expectations and results throughout the organization.

A conceptual framework is needed for the performance measurement and management system.

Every organization needs a clear and cohesive performance measurement framework that is understood by all levels of the organization and that supports objectives and the collection of results. Some organizations use a balanced set of measures and methodology to organize measures and align them with their overall organizational goals and objectives. Most have a uniform and well-understood structure setting forth how the process works and a clear calendar of events for what is expected from each organizational level and when.

Effective internal and external communications are the keys to successful performance measurement.

Effective communication with employees, process owners, customers, and stakeholders is vital to the successful development and deployment of performance measurement and management systems. It is the customers and stakeholders of an organization who will ultimately judge how well it has achieved its goals and objectives. And it is those within the organization entrusted with and expected to achieve performance goals and targets who must clearly understand how success is defined and what their role is in achieving that success. Both organization outsiders and insiders need to be part of the development and deployment of performance measurement systems.

Accountability for results must be clearly assigned and well understood.

High performance organizations clearly identify what it takes to determine success and make sure that all managers and employees understand their responsibilities in achieving organizational goals.

Performance measurement systems must provide intelligence for decision-makers, not just compile data.

Performance measures should be limited to those that relate to strategic goals and objectives, and that provide timely, relevant, and concise information for use by decision-makers – at all levels – to assess progress toward achieving predetermined goals. These measures should produce information on the efficiency with which resources are transformed into goods and services, on how well results compare to a program’s intended purpose, and on the effectiveness of organizational activities and operations in terms of their specific contributions to program objectives. Collecting data simply because the data is available to be collected, or because having large amounts of data looks good, doesn’t justify the effort. Organizations should choose performance measures that can help describe organizational performance, direction, and accomplishments, and then aggressively use these to improve products and services for customers and stakeholders.

Compensation, rewards, and recognition can be linked to performance measurements.

Very few organizations actually link performance evaluations and rewards to specific measures of success. The desire is to tie financial and non-financial incentives directly to performance. Such a linkage sends a clear and unambiguous message to the organization as to what’s important.

Performance measurement systems should be positive, not punitive.

The most successful performance measurement systems are not “gotcha” systems, but learning systems that help the organization identify what works – and what does not – so as to continue with and improve on what is working and repair or replace what is not working. Performance measurement is a tool that lets the organization track progress and direction toward strategic goals and objectives.

Results and progress toward program commitments should be openly shared with employees, customers, and stakeholders.

Performance measurement system information should be openly and widely shared with the organization’s employees, customers, stakeholders, vendors, and suppliers. Real time access can be maintained on the organization’s Internet and intranet sites for access by various levels of management, teams, and sometimes individuals. Most organizations use periodic reports, newsletters, electronic broadcasts, or other visual media to set forth their objectives and accomplishments.

This is not an end, but a beginning …

Effective performance measurement systems take time: time to design, time to implement, time to perfect. Performance measurement must be approached as an iterative process in which continuous improvement is a critical and constant objective.

Successful organizations use benchmarking to establish performance targets as part of a continuous improvement process. This can include process mapping and comparing these practices with other organizations considered to be the best.

The performance measurement process model developed through this study is provided as Appendix B.


CHAPTER 1
HOW DO WE KNOW IF WE ARE READY FOR PERFORMANCE MEASUREMENT?

“If a diamond needs to be cut and polished, it should be prepared to take the friction that goes with the process.” - Greg Mashapa

You wouldn’t be reading this book if you didn’t have some sense of your agency’s readiness for performance measurement. At least you have an interest in knowing more. If you want to assess how ready you are to take this on, this chapter will be helpful. If you know you are ready (or have been commanded to “engage”), you may want to move on to Chapter 2.

Doing a readiness survey

Before you plunge into a performance measurement process, there are a few things you should think about. It is very important for you to step back and survey the “lay of the land” before proceeding. Without doing this, the probability of creating another ‘flavor of the month’ is high. There are four possible steps in doing a readiness survey.

Step One: Answer the tough questions
Step Two: Determine your basic level of readiness
Step Three: Checklist your current system
Step Four: Discuss it as a group

Step One – Answer the tough questions

You may not have immediate answers to these questions, but it is important to begin thinking about them. Give it a try. Take a minute and write down your thoughts on these questions:

1 Is our agency already engaged in some sort of performance measurement process? What other initiatives are underway or are on the drawing board?

2 Why are we interested in a performance measurement system? What’s our motivation?
3 What are we signing up for with performance measurement? How much effort is required?
4 What are the real benefits of implementing a performance measurement system?
5 How do we get buy-in for the performance measurement process? Buy-in from whom?
6 How will our agency’s culture impact this effort?
7 How do we handle those who don’t want to participate? How do we overcome negative reactions from our staff?
8 Do we have the resources needed to do performance measurement?
9 How do we organize ourselves to make this process a success?


To assist you in considering the full array of answers for each question, here are some possible answers. Highlight those that apply to your agency.

1 Is our agency already engaged in some sort of performance measurement process? What other initiatives are underway or are on the drawing board?

Mapping out all of the initiatives that are currently underway or are on the drawing board can be revealing – and scary! List the initiative, timeline and who’s involved on one page (if you need more than one page, your agency is a good candidate for initiative burnout). This will help put the performance measurement process into perspective and suggest where you may find some challenges. Do any (or all) of these labels sound familiar?

• Accountability
• Balanced Scorecard
• Baldrige National Quality Program
• Benchmarking
• Best Practices Review
• Canada Awards for Excellence
• Governmental Accounting Standards Board (GASB) Statement No. 34
• Managing for Results
• Management Practices Self Assessment and Accreditation
• Performance Based Budgeting
• Service Efforts and Accomplishment Reporting (SEA)
• Strategic Planning
• Total Quality Management

2 Why are we interested in a performance measurement system? What’s our motivation?

• Someone thought it was a good idea.
• Someone went to a seminar and wants a promotion.
• We are required by management to do it.
• We need to create more accountability.
• We need to show that we are good stewards of public resources.
• We need to find ways to be more efficient.
• We want to find savings.
• We want or need more recognition.
• To find some tools to help us improve our services.
• Other agencies are doing it; we can, too.
• Our peers claim that this is what a well run agency does. Let’s show them we are one.
• To bring us out of the ice age.
• We need a fresh coat of paint.
• We need a compass or a guide to help us move to where we want to be.

3 What are we signing up for with performance measurement? How much effort is required?

• This is a big project.
• It’s another flavor of the month.
• It’s going to take too many resources and too much time.
• This will be a lot of work, but is a necessary part of public service in today’s world.
• This is a nice addition to our current management information system.


4 What are the real benefits of implementing a performance measurement system?

• Save money.
• Improve efficiency.
• Improve effectiveness.
• Will make all of us look good.
• Helps clarify budget needs.
• Helps identify operational and management needs.
• Promotes teamwork and staff development.
• Encourages interdepartmental coordination.
• Identifies duplication and wasted effort.
• Defines real outcomes that answer the “so what” question.
• Promotes public awareness.
• Creates a dialogue with our citizens and reports our progress on issues of importance.
• Improves communication.

5 How do we get buy-in for the performance measurement process? Buy-in from whom?

• We do/don’t have buy-in from management – administration, directors, managers, supervisors.
• We do/don’t have buy-in from our employees.
• We do/don’t have buy-in from our unions.
• We do/don’t have buy-in from our council.
• We need buy-in from everyone to make it work.
• Buy-in includes allowing those involved in the process to take the time needed to do the work – we do/don’t have it.
• Buy-in includes moral support and encouragement – we do/don’t have it.
• Buy-in includes a strong commitment to the process – we do/don’t have it.

6 How will our agency’s culture impact this effort?

• There won’t be any impact from our culture.
• Some employees may fear that this is the first step in a process that will lead to tougher work standards and a forced speedup of work processes or even layoffs.
• People will say that we can’t measure what they do.
• Some will say we are measuring the wrong thing.
• Everyone will embrace this as an opportunity to improve our services.

7 How do we handle those who don’t want to participate? How do we overcome negative reactions from our staff?

• Sell the benefits of the process.
• We don’t need to overcome them.
• The worth of the process will become evident as we progress.
• Storming is natural, don’t worry about them.
• Some people don’t like any change – they will be uncomfortable no matter what we do.
• Some negative is okay, we should encourage diversity of thought.
• The light goes on at different times for different people, just work with them.


8 Do we have the resources needed to do performance measurement?

• Resources are hard to come by – they include knowledge, understanding, management support, information systems, a project manager, staff time, etc.

• Managers and supervisors who already feel that their resources are stretched too thin will be reluctant to tackle new, time-consuming measurement reporting tasks.

• We need to carve out the time for this – something else will need to come off the plate or be delayed.

• We are going to need to ask for more resources in order to make this happen.

9 How do we organize ourselves to make this process a success?

• Assign a project manager.
• Each division needs to assign someone with this responsibility.
• Create a “tiger team” to get everything organized and put together.
• Work plans should be adjusted to include this new work.
• The people working on this should do it in addition to their normal work.
• A new resource should be added to handle all this new work.

Step Two – Determine your basic level of readiness

Next, you can determine your basic level of readiness by placing a checkmark below on those characteristics that best describe your agency. There are three basic levels of readiness: “have a lot of work ahead,” “getting ready,” and “ready to take it on.”

“Have a Lot of Work Ahead”

• We are already on overload from organization-wide initiatives.
• There is a lack of interest in performance measurement.
• Performance measurement is a once-a-year event.
• There is little or no community dialogue regarding what the community wants and needs.
• Our top management doesn’t really support new or innovative processes.
• There’s a status quo attitude – don’t rock the boat!

“Getting Ready”

• There is an interest in engaging the community in a dialogue on performance and accountability.
• Top management has recognized the need for performance measurement.
• The automated systems of the organization provide lots of options for information feedback.
• There is energy around really having the agency’s act together and being recognized for excellence.

“Ready to Take It On”

• We have some experience with performance measurement.
• We have a strong project manager with excellent people, communication, and technical skills.
• We are preparing for or are engaged in a management practices self assessment.
• There is ongoing community dialogue – and it has been happening for some time.
• There is strong support from our agency management, the organization’s administration, and the legislative body.


• We have a high degree of employee involvement in general.
• There is a desire to be accountable as an agency.
• We have made a commitment to ongoing improvement.

Step Three – Checklist your current system¹⁰

How does your current performance measurement system stack up? Do a little performance measurement of your own; assess it against this checklist.

• Our system starts with big-picture goals – and everyone knows what they are.

Any manager can set performance goals for work groups or individuals, but those immediate targets will always seem arbitrary unless people understand how they tie into the big picture. Does your agency have a clear, well-understood mission and goals? Are they widely publicized?

• Every work group has performance drivers linked to the big-picture goals.

How do people’s jobs tie to the overall goals? Departments and work groups within departments need specific objectives that somehow tie to the big-picture goals.

• Individual and work group performance objectives are clear – and reasonable.

Targets for performance can be set with any measure that makes sense. In some cases, job-specific goals – meetings held with customers, calls answered by the second ring – can be helpful, so long as the individuals involved are also part of a group with larger objectives. Even people whose work isn’t easily quantifiable should be held accountable for performance objectives. If the performance targets aren’t reasonable or are perceived as unfair, of course employees will dismiss them. On the other hand, you may want to develop stretch goals for the quarter or the year in case the work group blows past the “reasonable” targets.

• We educate and coach employees continually.

It happens often: agencies launch a performance measurement system with plenty of hoopla and plenty of training. A year later, employees have forgotten what most of the measurements mean – and new employees are utterly mystified. The important task of communicating the connection between work-group objectives and big-picture goals hasn’t continued.

• We have a clear and simple system that tracks progress toward our goals.

Most agencies have well-developed systems for tracking financial performance, but not many have equally sophisticated methods of tracking progress on the big-picture goals. What is tracked must be communicated in such a way that employees see and can follow the results. One way to track and share this information is through the use of “scoreboards” that are posted on office walls or on employees’ computer screens.
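As an illustration only, here is a minimal sketch in Python of what such a scoreboard calculation might look like. The measures, year-to-date results, targets, and the 75 percent “on track” threshold are all hypothetical; they are not drawn from this book or from any agency.

```python
# Minimal scoreboard sketch: compare each measure's year-to-date result
# against its annual target and report progress. All figures are hypothetical.

scoreboard = [
    # (performance measure, year-to-date result, annual target)
    ("Potholes repaired within 48 hours (%)", 87, 90),
    ("Street sweeping cycles completed", 9, 12),
    ("Residents rating snow removal good or better (%)", 71, 80),
]

for measure, actual, target in scoreboard:
    pct_of_target = actual / target * 100
    status = "on track" if pct_of_target >= 75 else "needs attention"
    print(f"{measure}: {actual} vs. target {target} ({pct_of_target:.0f}% of target, {status})")
```

However it is computed, the value of a scoreboard comes from posting it where every employee can see the same numbers management sees.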


Step Four – Discuss it as a group

There are many potential approaches for discussing the merits of a performance measurement system. Those who will be involved in the discussion also need to be determined. At a minimum, the management team should have a discussion prior to a great deal of time and effort going into the design of the system. “Management team” in this context refers to the public works director and all those managers and/or supervisors who report directly to the director.

The discussion approaches range from adding the topic to a regular management meeting agenda, to holding a workshop or retreat, to discussing the topic in the broader context of “where we are headed?”

One effective approach is a group exercise called “Readiness Assessment.” The purpose of the exercise is to identify where the agency currently is on the topic and where the group is willing to commit to move the agency in the future. Then there is a discussion regarding what it will take to go from where the agency is today to where it wants to be. This becomes the action plan. The group can be expanded beyond the management team as desired. Multiple exercises can be held with different groups if desired, e.g. each division of the agency, and the results are combined.

A 5x5 grid (drawn on a flipchart) is used to explain the exercise and gather the input from each participant. The x-axis assesses the readiness to do performance measurement. The y-axis assesses the need.

[Readiness Assessment grid: a 5x5 grid with READINESS on the x-axis (0–5) and NEED on the y-axis (0–5).]

The exercise includes the following steps:

• The Readiness Assessment grid is placed at the front of the room on a flipchart.

• The public works director provides the background and perspective for why this assessment is being done.

• The facilitator explains the purpose of the exercise, the process, and how the information will be used.



• If needed, the group can have a facilitated discussion about “What the culture of the agency looks and feels like” and/or “What resources are available for this process? What capacity do we have?”

• Group members are then given the opportunity to ask clarifying questions, raise concerns, etc.
• The questions and concerns should be addressed by the public works director (or the senior manager present).

• Participants are given two dots (different colors, e.g. red, blue) that will represent their opinions on where the agency is currently and at what level they are willing to commit to move the agency. For example, if a participant felt that currently the agency rates a 2 for readiness and a 3 on need, the red dot would be placed at the intersection of 2 on the x-axis and 3 on the y-axis. The blue dot is placed likewise for where the employee is willing to commit to help the agency move.

• All participants come forward at the same time to post their votes.

• The facilitator reviews the results with the group and determines where the center points are for each color – the current, and the future (a simple tallying sketch follows these steps).

• The group discusses what it will take to move from where they are now to where they want to be. The facilitator records the comments on a flipchart.

• The group further discusses what the next steps need to be. Assignments are made – who does what by when? (if appropriate).

• The facilitator captures the key points of the discussion, the action plan, etc., in a concise report. The report is distributed to all participants. Other facilitation guidelines are described in Chapter 2.
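The following is a minimal sketch in Python of how the dot votes from the exercise above might be tallied. The “center point” for each color is taken here to be the simple average of the posted dots; the sample votes and the averaging rule are assumptions for illustration, not a prescription from this book.

```python
# Minimal tallying sketch for the Readiness Assessment exercise.
# Each dot is recorded as (readiness, need) on the 0-5 grid; the center
# point of each color is computed as the simple average. Sample votes
# below are hypothetical.

def center(dots):
    xs = [x for x, _ in dots]
    ys = [y for _, y in dots]
    return sum(xs) / len(xs), sum(ys) / len(ys)

current_dots = [(2, 3), (1, 4), (2, 2), (3, 3)]   # red dots: where the agency is now
commit_dots = [(4, 4), (3, 5), (4, 3), (5, 4)]    # blue dots: where participants will commit to move it

cur_r, cur_n = center(current_dots)
tgt_r, tgt_n = center(commit_dots)

print(f"Current center point:   readiness {cur_r:.1f}, need {cur_n:.1f}")
print(f"Committed center point: readiness {tgt_r:.1f}, need {tgt_n:.1f}")
print(f"Gap to close:           readiness {tgt_r - cur_r:+.1f}, need {tgt_n - cur_n:+.1f}")
```

The gap between the two center points is what the group's action plan has to close.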

Now What?

After going through these four steps, you should have a very good idea whether or not, and how, you should proceed. The readiness survey helps ensure that you go into the performance measurement process with your eyes wide open.

Check out these websites

If you feel ready to see what other agencies are doing, you may want to check out these websites for starters:

Cities
• Charlotte, NC – www.ci.charlotte.nc.us
• Portland, OR – www.ci.portland.or.us
• Sunnyvale, CA – www.ci.sunnyvale.ca.us

Counties
• Fairfax Co., VA – www.co.fairfax.va.us
• Multnomah Co., OR – www.multnomah.lib.or.us
• Prince William Co., VA – www.co.prince-william.va.us

States
• Oregon – www.state.or.us
• Utah – www.state.ut.us
• Virginia – www.state.va.us


Federal
• FinanceNet – www.financenet.gov
• General Accounting Office – www.gao.gov
• National Performance Review – www.npr.gov
• Office of Management and Budget – www.whitehouse.gov/omb
• US State and Local Gateway – www.statelocal.gov

Organizations
• Baldrige National Quality Program – www.quality.nist.gov
• Canada Awards for Excellence Program – www.nqu.ca
• GASB Statement No. 34 – www.rutgers.edu/accounting/raw/gasb
• International City/County Management Association – www.icma.org

Partners with APWA
All of the following partners are linked at the APWA website – www.apwa.net

• 3M, Traffic Control Materials Division

• American Consulting Engineers Council (ACEC)

• American Water Works Association (AWWA)

• Asociacion De Municipios De Mexico, A.C. (AMMAC)

• Associated General Contractors (AGC)

• Civil Engineering Research Foundation (CERF)

• Equipment Maintenance Council (EMC)

• Federal Highway Administration (FHWA)

• Federal Laboratory Consortium

• Infolink

• Institute of Public Works Engineering Australia (IPWEA)

• Institute of Transportation Engineers (ITE)

• National Association of Transportation Technology Transfer Centers (NATTTC)

• National Council for Public-Private Partnerships (NCPPP)

• Professional Grounds Management Society (PGMS)

• Solid Waste Association of North America (SWANA)

• US Army Corps of Engineers (USACE)

• Urban and Regional Information Systems Association (URISA)

• Water Environment Federation (WEF)


CHAPTER 2
WHERE ARE WE HEADED?

“You’ve got to think about “big things” while doing small things, so that all the small things go in the right direction.” - Alvin Toffler

Checking in with your mission, vision and values

Does your organization have clear direction, plans that align, and employees who understand how they fit into the big picture? This is a challenge that faces every organization of every type in all industries. As government agencies, we have particular challenges and opportunities in this regard. The expectations of our citizens are ever changing and increasing, and the available resources must be stretched further and further. Is it possible to have clarity on direction? Should we expect our employees to align with the course that is set? Just imagine …

• working in an agency where every member, from top management to the newly hired employee, shares an understanding of the agency’s goals and purpose.

• participating in a work group where everyone knows how he or she contributes to the agency’s success.

• being on a team whose every member can clearly state the needs of the agency’s customers and how the team contributes to satisfying those needs.¹

How would you characterize your agency from the “knowing where it’s headed” perspective? Do any of these statements capture the level of interest and action?

• We know where we are headed, have aligned ourselves with our community’s vision, and report our progress internally and externally quarterly.

• We have everything aligned and linked together – everyone is on the same page, and they hold themselves accountable.

• We have a mission, vision and values statement that was developed by top level management, but they are just words on the wall.

• There are a few people at the top who know where we are headed, but it’s a secret to everyone else.

• We don’t know where we are headed, and everyone seems to feel it would be very difficult to describe in just a few words.

• No one really cares about this stuff inside our organization except the elected officials.

There are many approaches to defining your direction and aligning your resources . There is a right answer for your agency . Your challenge is to select the approach that best fits your situation and circumstances .

It is becoming harder and harder to not be proactive in this area . We must communicate effectively with our citizens and employees or face the consequences – citizen revolt, low morale, etc .


Guidance from the Public Works Management Practices Manual

For the public works industry the Public Works Management Practices Manual establishes a set of benchmarks and defines some important terms . This manual was first developed in 1991 through the APWA Research Foundation with the assistance of APWA Institutes and Councils for Professional Development and the Management Practices Advisory Committee . It includes more than 450 public works management practices that detail uniform criteria and procedures to perform all public works services . The manual has been updated twice since 1991 – in October 1995 and again in August 1998 .

In the first chapter of the manual, entitled Organization, the first management practice pertains to mission, vision, value statements as follows:

1 .1 Mission, Vision, Value Statements

Statements are developed which define the agency’s mission, vision and values. These statements are approved by the legislative or administrative body overseeing the agency and reviewed periodically.

The agency’s mission statement is a concise description of the fundamental purpose for which the agency exists . This statement answers the questions of why the agency exists and who the agency is serving . Mission statements for fundamental areas (streets, water supply, solid waste management, etc .) also may be developed .

The agency’s vision statement describes the vision of the agency’s leadership . The role of a leader is to create a vision and set a course for moving toward that dream . Leaders convert dreams into reality . This statement answers the questions of what the leadership of the agency wants to create and where the agency is going .

The agency’s values statement establishes the core values which will assist in fulfilling the mission of the agency . This statement answers the questions of what culture the leadership of the agency wants to create and how all agency employees are to act . The values are tangible behaviors that define how each employee is expected to act .

Why is strategic direction important?

It is true that if you don’t know where you are headed, you won’t know when you get there . If I don’t know why my organization exists (its mission), how can I know if I am contributing? It is the responsibility of leaders in all organizations to set the direction (the vision) by which business plans and work plans are developed, resources are allocated, and performance is judged . And if there are no clear expectations of how our employees are to treat customers and other employees, then their own value systems prevail . When management fails to provide clear direction, employees invent their own .

Mission, vision, value statements are a foundation, an anchor, a guidepost for all to rally around . For example, Abraham Lincoln “preached a vision” of America that continues to serve as a powerful foundation . He provided a clear, succinct, and timely message that united people behind a common cause . Lincoln created opportunities to convey his message in simple, reverent, and patriotic terms . His vision communicated integrity, values, and high ideals that appealed to all people .


“One country, one constitution, one destiny.” (Speech, March 15, 1837)

“Liberty and union, now and forever, one and inseparable.” (Speech, January 26, 1830)

“That this nation, under God, shall have a new birth of freedom, and that government of the people, by the people, for the people, shall not perish from the earth.” (Speech at Gettysburg, November 19, 1863)

Lincoln’s messages renewed the spirit of Americans . By clearly sharing his vision and then gaining acceptance and commitment, he created a battalion of energy within each person .

Such is our challenge. True leaders at all levels have vision.

• Vision to see what's possible
• Vision to invent a future
• Vision to clarify and communicate a direction
• Vision to set a course and lead people there

Real mission, vision, values definitions and examples

Defining the terms (mission, vision, values) is crucial when attempting to create or update these statements . The following definitions and examples are provided to assist you .

There are seven types or levels to be considered:

Type / Level          Example
Institution           Constitution of the United States
Organization          City or County of X
Department/Agency     Public Works
Division              Road Maintenance
Program               Street Sweeping
Family                the Smith Family
Personal              Jim Smith

Mission Statements

Mission statements answer these two questions: Why do we exist? Who do we serve? A good mission statement has the following qualities:

• Is timeless - written as if it will never change
• Is short enough to remember
• Reflects the needs and goals of the people who create it
• Does not include goals - goals change
• Becomes the creed or constitution
• Is more than value statements


Mission Statement Examples

• Government Agency – Enhancing the quality of life for present and future generations.
• Nonprofit Agency – Uniting people and resources to build healthy communities.
• Public Works Department – Provide, control, and maintain public works facilities.
• Road Maintenance Division – To maintain all city roadways.
• Street Sweeping Program – To keep our streets clean.
• Family – To love each other. To help each other. To believe in each other. To have fun together. To wisely use our time, talents, and resources.
• Personal – My mission is to give, for giving is what I do best. I will seek to learn, for learning is the key to living. I want to teach my children and others to love and laugh, to learn and grow beyond their current bounds. I will build personal, business, and civic relationships by giving frequently in little ways.

Vision Statements

Vision is your picture of the possible future you want to create for your organization, department, program, family, or yourself. If you are a person of vision, you need to do more than stay busy; you need to be busy with actions that move you toward your life and work purposes.

Vision statements answer these two questions: What do we want to create? Where are we going?

Vision Statement Examples

• Government Agency - We desire to create a safe, secure and healthy community, a sustainable community, and a model public organization .

• Nonprofit Agency – Each person’s wellness, health and safety needs are met through caring, giving communities .

• Public Works Department - We will create transportation, water, and waste systems that will meet the needs of our children .

• Road Maintenance Division – We will maintain our roadways at the quality level desired by our citizens .

• Street Sweeping Program – We will meet or exceed our citizens' expectations for street cleanliness.
• Family – Each member of our family will achieve their greatest potential and be happy.
• Personal – My personal vision or dream is to live happily with my family, to be respected in my field of work, and to live a principle-centered life.

Value Statements

Values are our beliefs that underlie our thoughts, words, feelings, and actions . Our values define the way we act . They reflect our cultural background, personal discoveries, and family scripts . And they are the glasses through which we look at the world . We evaluate, assign priorities, judge, and behave based on how we see life through these glasses .

Values answer these two questions: What culture do we want? How do we want everyone to act?

There are many values from which to choose. A partial list includes:


Accountability    Generosity      Persistence
Alignment         Helpfulness     Punctuality
Assertiveness     Honesty         Purpose
Attitude          Honor           Reliability
Caring            Humility        Respect
Commitment        Humor           Responsibility
Compassion        Imagination     Results-Oriented
Confidence        Individuality   Risk-Taking
Cooperation       Innovative      Safety
Courage           Integrity       Self-Discipline
Courtesy          Justice         Self-Reliance
Creativity        Kindness        Service
Dependability     Knowledge       Synergy
Determination     Leadership      Tact
Discipline        Love            Teachable
Energetic         Loyalty         Tolerance
Enthusiasm        Moderation      Trust
Equity            Optimism        Trustworthy
Esteem            Participation   Understanding
Excellence        Partnership     Unity
Fairness          Passion         Vision
Flexibility       Patience        Wisdom
Friendliness      Perseverance    Worth

It is important to define each of the terms selected so everyone gains a better understanding of what is expected.

Value Statement Examples

• Government Agency - In fulfilling our mission, we commit to the following core values: partnership, accountability, respect, integrity, service .

• Non-profit Agency – We commit to create a healing environment that role models our values in everything we do: respect, integrity, service, excellence .

• Public Works Department – We commit to the following values:
  respect – everyone's input will be acknowledged, respected and valued;
  accountability – everyone has a clear understanding of the direction and goals to be accomplished and is personally and collectively responsible for the outcome;
  integrity – foster an environment which inspires fair and honest treatment for all;
  creativity – establish an environment that promotes and inspires creative and innovative ideas and actions.
• Road Maintenance Division – same as public works department.
• Street Sweeping Program – same as public works department.


• Family – We commit to treat each other with respect, to value individual worth, to have unity in our home, and an atmosphere of love .

• Personal – My values (how I want to act):
  Integrity – being true to my values and remembering what is really important
  Trust – being trustworthy and openly trusting others
  Respect – respecting all people in every way and striving to be respected by others
  Service – providing quality services to my clients, establishing positive working relationships, and serving those in need

Two basic approaches

There are many possible approaches to defining your mission, vision, values statement . Two approaches will be presented that represent the extremes regarding involvement – a whole system approach or a group approach (something less than the whole system, e .g ., the management team or several representatives of the agency) .

The group approach may be necessary if there’s not time to get everyone involved or if the situation requires leadership (action from the top is needed) to move forward .

The whole system approach

When almost everyone in the whole system is involved – in this case, the employees of the public works agency – the general term used is a large group intervention. The large group or whole system methods deliberately involve a critical mass of the people affected by change, both inside the agency (employees and management) and outside it (citizens, customers, suppliers).

This type of process is based on the principle that involvement leads to commitment . When you involve a critical mass from the whole system, there is a lot of involvement, and when people contribute to a process they tend to be committed to the outcome – they have ownership .

There are many different names for these methods including whole-system change, large-scale organizational change, the Conference Model, Future Search, Simu-Real, and Real Time Strategic Change . New names and approaches are being introduced constantly .

This whole-system change process allows a critical mass of people to participate in:

• Understanding the need for change
• Analyzing the current reality and deciding what needs to change
• Generating ideas about how to change existing processes
• Implementing and supporting change and making it work

People will support what they help to create . When everyone is involved in the decision process, carrying it out happens faster and with less resistance . And with the whole system involved there is often a more creative solution than a small group can produce .

The major downside to this approach is the sheer number of people involved and the major commitment of time it requires. The typical format is one to three days in a workshop or retreat setting, with follow-up sessions as appropriate. It takes skilled facilitators to design and run a large group process, which may be expensive if you do not have that expertise within your agency.11


The City of Sacramento Public Works Department used the whole system approach to develop its mission, vision, values and goals statement and to identify and make desired improvements . Their process is a good example of how to use this approach successfully .

The department serves a population of 400,000 in an area totaling 98 square miles with 800 employees . The services provided include maintenance services, technical services, solid waste, on-street parking, and animal care . Public Works Director Mike Kashiwagi saw the need to make some changes and improvements but wanted to tap into the knowledge and experience of his employees . That was the starting point .

The primary drivers for change included: improving competitiveness and cost effectiveness – the need to demonstrate value to customers, to retain the privilege to serve; changing customers' perception that we were doing something to them rather than for them; and delivering services based on the needs of the customer rather than on convenience. To achieve these outcomes required fundamental changes in work systems and work behaviors. The compelling reasons to change included: the need to create an organization which understands, values, and appreciates the needs of our customers and ourselves; the need to create an organization which believes we are not just another governmental entity, but a business which must continually improve in order to stay in business; and the need to create an organization which continually identifies system and organizational improvements that support quality service and customer satisfaction at a competitive cost, rather than trying to hold onto and preserve the status quo.

The management team selected an approach called “real time strategic change” because it is based on values and represents a process that was compatible with two principles they wanted to drive their change effort – shared leadership and meaningfully involving people . This approach utilizes and leverages the knowledge, talents, creativity, and experiences of everyone in the agency . The outcomes result in improvements because the initiatives are introduced and driven by people most knowledgeable of operations . It supports the message that success cannot be the responsibility of a select few – everyone is responsible and accountable for the success of the agency . This approach is designed to achieve rapid and sustainable changes in the agency by involving all employees of the agency at one time in a large-scale meeting .

Prior to involving everyone in the agency they undertook some leadership development to build a stronger, more cohesive leadership team . In this process they built a shared understanding of: what business they’re in (mission); where they wanted to be (vision); key focus areas (goals); and for what they stood (values) .

Then, to get a better sense of the current situation and to determine their customers' perceptions, the leadership team began a strategic planning process by gathering data from key customers. They interviewed 75 key customers – the business community, neighborhood groups, city officials, and other city departments – and asked: What is your first impression when you think of public works? What are the current or future issues facing the city? What are we doing right/wrong? What suggestions do you have for improvement?

Based on the customer feedback and their own knowledge, the leadership team gained a shared understanding of why they needed to change and what they needed to do differently . The leadership team next explored ways to bring employees into the planning process . Many options were considered – top down, bottom up, representative cross section, etc .


The criteria for determining how to proceed with an agency-wide effort included:

• Choosing options that achieve their purpose . The focus was on the purpose and reasons why a change was needed . The purpose was the primary guide as they planned all aspects of how to involve everyone in the agency .

• Walking the talk . They wanted to begin “walking the talk” of the culture they wanted to create . That meant creating a more inclusive culture – one that was participatory and involved everyone at all levels .

• Building support capacity. The structures and systems needed to support the improvement effort were determined. For example, they looked for ways to support a collaborative environment through team activities. Did they have adequate training? Were there appropriate meeting facilities and staff support for these types of activities?

• Logistical constraints . To bring together all 800 employees at one time would mean shutting down the entire operation, which of course was not feasible . Creative thinking and a real look at all of the constraints was necessary .

• Everything you do sends a message . They realized that during times of change people are hypersensitive and look for underlying meaning in all decisions . A consistent message had to be sent to all employees about the improvement effort .

A design team was organized that had representation from the whole agency. The purpose of the design team was to design a process so that everyone's voice would be heard in creating the future of public works. Face-to-face meetings were held throughout the agency to explain the need for change and the process for engaging everyone, and to solicit nominations for employees to serve on the design team. The 23-member design team was selected from 150 nominations and included a cross section of the agency in terms of ethnic and gender diversity, age, job classification and level, work site location, a balance between field operations and technical staff, all divisions represented, length of service, and an individual's willingness to serve. After the design team was selected and chartered, the team made a presentation to the city's leadership requesting support and, in some cases, participation. The design team also took on the responsibility of communicating with all employees as the process progressed.

Then additional training was conducted for all managers and supervisors . The training included communication, facilitation and coaching skills so they would be prepared to help employees as the agency transitioned to a more customer-focused and business-oriented organization .

The design team's work culminated in a series of back-to-back, two-day meetings where all employees came to develop a shared vision and ways to improve the agency (four meetings with 200 employees each). The meetings were called "Workout '96" with the theme of "strengthening our future / building our future." From each of the four groups, 25% were selected to attend a final two-day convergence meeting where all suggestions were forged into a written plan of action. The top initiatives that needed to occur to achieve their preferred future were identified. A group of 20 employees volunteered to work with the leadership team to determine how to prioritize and move forward on the initiatives.

The top five initiatives were identified along with recommendations for improvement: navigating the flight plan (a team responsible for coordinating the remaining four initiatives); training design; external communications; employee performance evaluation; and challenge privatization (staying competitive). The agency's mission, vision, values and goals were another result of this effort.

In May 1998, follow-up sessions were held called “Update ’98 – Imagine… Building Our Future.” At these sessions (again four meetings with 200 employees each) there was an update of what had happened since Workout ’96, an opportunity to generate solutions on how to make things better, and a chance to


reconnect with others in the divisions and departments to discuss common issues . The agency’s “hot issues” were prioritized and a plan developed to address them and reach the preferred future .

Appendix C contains the details of their efforts including mission, vision, values, goals, process, and agendas .

Developing or updating your mission, vision, values statement in one fine day

If you haven’t developed a mission, vision, value statement and desire to or if your current statement needs to be updated, it is possible to do so without great expense . Some organizations have taken years and gone to great expense to develop this statement . It is true that the process in developing the statement is very important . However, it does not need to take years, months, or even weeks . This group approach is intended to be accomplished in a short time frame .

The following format and agenda have been used (with variations) to develop many mission, vision, value statements . The intent is to complete the first draft in a one-day workshop . The one-day format has several facilitation models, concepts, and tools incorporated including: design teams, environmental scanning, appreciative inquiry, brainstorming, problem solving, consensus building, and full participation . This structure is designed to honor everyone’s thoughts, encourage participation, enable hands-on experience, create sustainable agreements, and build teamwork .

The workshop is designed to revisit, refresh or create mission, vision, and value statements as defined by public works management practice 1 .1 in the Public Works Management Practices Manual .

Prior to the Workshop
A design team meets to detail the workshop, including identifying the desired outcomes, location, attendees, etc. The agenda and any "pre-think" pieces are sent to participants one week prior to the workshop. The workshop is designed for up to 40 participants.

After the Workshop
Within a few days of completion of the workshop, someone needs to prepare a written summary of the results (some facilitators provide this service). This report highlights the key points of the discussion and details the results and action plan. The completed report should be distributed to all participants. You may desire all public works employees to have an opportunity to review and comment on the proposed statement. This can occur by distributing the report and providing a means for feedback or holding discussion sessions. It is important that all employees have an opportunity to provide input before it is finalized. Once final, the statement can be shared with elected officials and administrators, and incorporated into letterhead, posters, signs, advertising, etc., as appropriate.

Appendix C contains a sample one-day workshop agenda for developing a mission, vision, values statement .

Do we need a facilitator?

To facilitate is “to make easy or more convenient .” The facilitator guides a group toward specific outcomes .

Have you ever led a meeting and found it difficult or even impossible to both participate and lead the discussion effectively? You could have benefited from involving a facilitator . There are a number of situations when you might need a facilitator . Here are a few more to consider .


• There’s a lot to be accomplished in a short period of time, and you need someone to keep the discussion on track and moving .

• You need an objective, outside resource to provide feedback on the process and dynamics.
• You want to use a structured process for exploring new options, creating new direction, or making decisions.

Can you lead and facilitate at the same time? Sure you can! We do it every day . And it’s a skill that we develop through experience . At times it’s also very difficult to do both . Sometimes you can anticipate a need to designate someone else to facilitate . There are other times in the midst of a meeting when you need to be clear about which hat you are wearing . Knowing how to facilitate and developing the needed skills will benefit both you and those you are leading .

Appendix C contains an outline of facilitation skills and details regarding problem solving skills, recording skills, and achieving consensus .

The principle of alignment

In The Power of Alignment, George Labovitz and Victor Rosansky do a wonderful job of defining and describing the principle of alignment . Alignment can be thought of as both a noun and a verb – a state of being and a set of actions . The real power of alignment is when we view it as a set of actions . These actions enable us to:

• Connect our employees' behavior to our mission, turning intentions into actions
• Link teams and processes to the changing needs of customers
• Shape business strategy with real-time information from customers
• Create a culture in which these elements all work together seamlessly

Alignment is a force . It focuses an organization and moves it forward . Jim Barksdale, the CEO of Netscape, captured the greatest challenge that managers face today–keeping their people and organizations centered in the midst of change–when he said, “The main thing is to keep the main thing the main thing!”

There are five basic steps in aligning an organization . They are deceptively simple, hard to implement, and even harder to sustain:

1 carefully crafting and articulating the essence of the organization and determining the Main Thing;

2 defining a few critical strategic goals and imperatives and deploying them throughout the organization;

3 tying performance measures and metrics to those goals;

4 linking these measures to a system of rewards and recognition; and

5 top level managers personally reviewing the performance of their people to ensure the goals are met.

The goal is to create alignment between people, customers, strategy, and process . This is accomplished by staying balanced and focused as illustrated by Labovitz and Rosansky in the following analogy .

“If you’ve ever sat in the cockpit of a small plane as it makes its landing approach, you can appreciate the process of alignment . The pilot must sense and respond to a set of interactive variables that change as the plane makes its approach – with many things happening at once . Crosswinds affect the plane’s


orientation to the runway, which must be adjusted . Airspeed must be controlled with the flaps and throttle . The rate of descent, and pitch and yaw of the plane, too, must be adjusted as the plane moves down the glide slope . And there is substantial interaction among these many elements on landing . If the pilot manages them properly, the plane stays aligned with the runway and the glide slope, and it makes a safe landing . Like landing an airplane, aligning a department or an entire organization is an ongoing balancing act that involves setting direction, linking processes and systems, and making constant adjustments . Fail to adjust and you’ll drift . Over adjust and you’ll lurch from one side of your intended course to another .

The need for companies to reengineer is in many respects a consequence of past failures to make small, manageable adjustments on an ongoing basis . Alignment relies on two essential dimensions: vertical and horizontal . The vertical dimension is concerned with organizational strategy and the people we rely on daily to transform strategy into meaningful work . The horizontal dimension involves the business processes that create what the customer most values . Both of these dimensions must be in sync – independently and with each other . Once alignment is achieved, performance measures and proper management are needed to keep it that way .”2


CHAPTER 3: WHAT APPROACH SHOULD WE USE?


“Genius is seeing what everyone has seen, and thinking what no one has thought.” - Albert Szent-Györgyi

In what context do we use performance measurement?

When it comes to selecting an approach or model to measure “big picture” results, there are many potential answers . Odds are that some aspects of the approach will be dictated to you by your administration or by external agencies . There are pros and cons to each approach, and performance measurement fits into or can be a part of each approach .

Performance measurement is one element, one tool, one part of a performance management system .

Performance management:

The use of performance measurement information to set performance goals, allocate and prioritize resources, review results, communicate with all stakeholders, and reward performance .

The definition of the ideal results oriented system is yet to be written . The very concept is still evolving, and many different approaches have a reasonable chance of success . Whatever approach is taken, however, there’s a real hazard in the assumption that time-consuming measurement systems automatically lead to tangible benefits . Simply measuring outcomes is worthless if nobody uses the outcomes to manage . The best strategic plan has roughly as much value as a map in the glove compartment: Unless you actually use it for guidance, you are going to wind up lost out in the middle of nowhere .

Before selecting a performance management approach, you need to be clear about how performance measurement will align with other current initiatives . If you can’t easily explain how it all links together or draw the relationships on a single sheet of paper, it’s time to revisit where you are headed and how you are going to get there .

The following performance management approaches are summarized for your review . There are many resources available in the form of books, seminars, and the world wide web on each topic should you need to do an in-depth evaluation .

• Accountability
• Balanced Scorecard
• Baldrige National Quality Program
• Benchmarking
• Best Practices Review
• Canada Awards for Excellence
• Governmental Accounting Standards Board (GASB) Statement No. 34
• Managing for Results
• Management Practices Self Assessment and Accreditation
• Performance Based Budgeting
• Service Efforts and Accomplishment Reporting (SEA)
• Strategic Planning
• Total Quality Management


Accountability

There are many levels of accountability – from governments answering to their citizenry to personal accountability of each employee . From a functional perspective, accountability has been defined by the American Accounting Association in the form of a ladder comprising five distinct levels3:

Level 1: Policy accountability – selection of policies pursued and rejected (value) .

Level 2: Program accountability – establishment and achievement of goals (outcomes and effectiveness) .

Level 3: Performance accountability – efficient operation (efficiency and economy) .

Level 4: Process accountability – using adequate processes, procedures, or measures in performing the actions called for (planning, allocating, and managing) .

Level 5: Probity and legality accountability – spending funds in accordance with the approved budget or being in compliance with laws and regulations (compliance) .

There are both financial and non-financial accountability issues, and levels of measurement that range from objective to subjective .

What is needed for accountability may be different for each level, each program or service, and the situation within the agency and community .

If accountability is one of your agency’s values (from your mission, vision, values), then you need to define clearly what accountability means within your agency . The process for doing this can involve all levels of employees, managers and elected officials through focus groups, workshops, surveys, etc . Landing on a single definition that reflects the many perspectives of your employees can be a real challenge . Once this definition is developed, it should be communicated to all employees of the agency . Some agencies have developed workshops to discuss the concepts and engage employees in a dialogue .

The accountability program for one agency included several components:

Leadership accountability retreat
Products of the retreat included a definition of accountability, a catalog of reasons to be accountable, definition of accountability measures for leadership, and development of an accountability action plan.

Expectations workshop
As a result of the leadership accountability retreat, a workshop was held with a group of employees to define "expectations," define the roles for which they needed to develop expectations, and create a list of expectations for each role.

Accountability video
A video was developed featuring the top administrative officer (a county executive in this case). The purpose of the video was to present what accountability means to the organization. A companion brochure was developed highlighting the key points of the video and describing how to "walk the talk" in the organization.


Accountability focus groups
To validate the content of the video, the brochure, and a proposed agenda for an accountability workshop for all employees, an employee focus group was conducted. Changes were made based on the feedback.

Accountability training program

All employees of the organization were invited to participate in an accountability workshop . The purpose of the workshop was to engage employees in a dialogue about accountability and what it means in the organization . This one-hour workshop was facilitated by internal facilitators .

Quarterly performance report
A new reporting process and format was developed to gather and report progress on organization-wide performance indicators and all department performance measures.

Management performance assessment system
A clearly defined list of expectations was developed for all directors, managers and supervisors in four areas – management of people, management of operations, management of issues, and customer/client relations. A performance contract was designed that defined the commitments being made for each year. At the conclusion of each year, an assessment of commitments versus accomplishments was made during each employee's performance appraisal. The results influence the receipt or non-receipt of merit pay for the year.

Balanced Scorecard

In The Balanced Scorecard Robert Kaplan and David Norton defined this term as:

“The balanced scorecard translates an organization’s mission and strategy into a comprehensive set of performance measures that provides the framework for a strategic measurement and management system. The balanced scorecard retains an emphasis on achieving financial objectives, but also includes the performance drivers of these financial objectives. The scorecard measures organizational performance across four balanced perspectives: financial, customers, internal business processes, and learning and growth.”

The measures are balanced between the outcome measures – the results from the past efforts – and the measures that drive future performance . They are also balanced between objective, easily quantified outcome measures and subjective, somewhat judgmental, performance drivers of the outcome measures .
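To make the four perspectives concrete, the sketch below (written in Python) lays out a hypothetical public works scorecard that pairs each perspective with one outcome measure and one performance driver. The measures themselves are invented for illustration only; they are not prescribed by Kaplan and Norton or by this guide, and each agency would substitute measures drawn from its own strategy.

    # A minimal sketch of a balanced scorecard for a hypothetical public works agency.
    # The perspective names follow Kaplan and Norton; the measures are invented examples only.
    scorecard = {
        "Financial": {
            "outcome measure": "cost per lane-mile maintained",
            "performance driver": "percent of work orders completed within budget",
        },
        "Customer": {
            "outcome measure": "citizen satisfaction rating from the annual survey",
            "performance driver": "average days to close a service request",
        },
        "Internal Business Processes": {
            "outcome measure": "percent of streets rated good or better",
            "performance driver": "ratio of preventive to reactive maintenance hours",
        },
        "Learning and Growth": {
            "outcome measure": "employee retention rate",
            "performance driver": "training hours per employee per year",
        },
    }

    for perspective, measures in scorecard.items():
        print(perspective)
        for kind, measure in measures.items():
            print(f"  {kind}: {measure}")

Keeping a driver beside each outcome is what keeps the scorecard "balanced" in the sense described above: the outcomes report on past results, while the drivers signal whether future results are likely to improve.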

Many organizations are using this approach as a strategic management system to manage their strategy over the long run in these processes:

• clarify and translate vision and strategy;
• communicate and link strategic objectives and measures;
• plan, set targets, and align strategic initiatives; and
• enhance strategic feedback and learning.

The process for building a balanced scorecard includes:

Define the measurement architecture
Select the appropriate organizational unit and identify the linkages.


Build consensus around strategic objectives
Conduct first round interviews, have a synthesis session, and an executive workshop – round one.

Select and design measures
Have subgroup meetings, and an executive workshop – round two.

Build the implementation plan
Develop the implementation plan, have executive workshop – round three, and finalize the implementation plan.

Baldrige National Quality Program

The criteria for performance excellence contained in the Baldrige National Quality Program (as defined in the Malcolm Baldrige National Quality Improvement Act of 1987 – Public Law 100-107) help organizations enhance their performance through focus on dual, results-oriented goals:

• Delivery of ever-improving value to customers, resulting in marketplace success; and
• Improvement of overall organizational effectiveness and capabilities.

Although local, state, and federal government agencies are not eligible to receive the national quality award, elements of the criteria are insightful as follows:

Strategy Development (2.1)
Describe your organization's strategy development process to strengthen organizational performance and competitive position. Summarize your key strategic objectives.

• What is your strategic planning process?
• How do you consider the following key factors in your process? Include how relevant data and information are gathered and analyzed.

The factors:
• Customer and market needs/expectations, including new product/service opportunities
• Your competitive environment and capabilities, including use of new technology
• Financial, societal, and other potential risks
• Your human resource capabilities and needs
• Your operational capabilities and needs, including resource availability
• Your supplier and/or partner capabilities and needs

Strategic Objectives
What are your key strategic objectives and your timetable for accomplishing them? In setting objectives, how do you evaluate your options to assess how well they respond to the factors most important to your performance?

Strategy Deployment (2.2)
Describe your organization's strategy deployment process. Summarize your organization's action plans and related performance measures. Project the performance of these key measures into the future.


Action Plan Development and Deployment
• How do you develop action plans that address your key strategic objectives? What are your key short and longer-term action plans?
• What are your key human resource requirements and plans, based on your strategic objectives and action plans?
• How do you allocate resources to ensure accomplishment of your overall action plan?
• What are your key performance measures and/or indicators for tracking progress relative to your action plans?
• How do you communicate and deploy your strategic objectives, action plans, and performance measures/indicators to achieve overall organizational alignment?

Performance Projection
• What are your two to five-year projections for key performance measures and/or indicators? Include key performance targets and/or goals, as appropriate.
• How does your projected performance compare with competitors, key benchmarks, and past performance, as appropriate? What is the basis for these comparisons?

Customer Satisfaction and Relationships (3.2)
Describe how your organization determines the satisfaction of customers and builds relationships to retain current business and to develop new opportunities.

Customer Satisfaction Determination
• What processes, measurement methods, and data do you use to determine customer satisfaction and dissatisfaction? Include how your measurements capture actionable information that reflects customers' future business and/or potential for positive referral. Also include any significant differences in processes or methods for different customer groups and/or market segments.
• How do you follow up with customers on products/services and recent transactions to receive prompt and actionable feedback?
• How do you obtain and use information on customer satisfaction relative to competitors and/or benchmarks, as appropriate?
• How do you keep your approaches to satisfaction determination current with business needs and directions?

Measurement of Organizational Performance (4.1)
Describe how your organization provides effective performance measurement systems for understanding, aligning, and improving performance at all levels and in all parts of your organization.

• How do you address the major components of an effective performance measurement system, including the following key factors?
• Selection of measures/indicators, and extent and effectiveness of their use in daily operations
• Selection and integration of measures/indicators and completeness of data to track our overall organizational performance
• Selection, extent, and effectiveness of use of key comparative data and information
• Data and information reliability
• A cost/financial understanding of improvement options
• Correlation/projections of data to support planning
• How do you keep your performance measurement system current with business needs and directions?


Analysis of Organizational Performance (4.2)
Describe how your organization analyzes performance data and information to assess and understand overall organizational performance.

• How do you perform analysis to support your senior executives' organizational performance review and your organizational planning? How do you ensure that the analysis addresses the overall health of your organization, including your key business results and strategic objectives?

• How do you ensure that the results of organization-level analysis are linked to work group and/or functional-level operations to enable effective support for decision making?

• How does analysis support daily operations throughout your organization? Include how this analysis ensures that measures align with action plans .

Organizational Effectiveness Results (7.5)
Summarize your organization's key operational performance results that contribute to the achievement of organizational effectiveness. Include appropriate comparative data.

• What are your current levels and trends in key measures and/or indicators of key design, production, delivery, and support process performance? Include productivity, cycle time, and other appropriate measures of effectiveness and efficiency .

• What are your results for key measures and/or initiatives of regulatory/legal compliance and citizenship? What are your results for key measures and/or indicators of accomplishment of organizational strategy?

Benchmarking

Benchmarking is a process for identifying and importing best practices to improve performance. It is not just a simple comparative study or simply copying from other agencies. A good benchmarking analysis produces two types of information: quantitative data that are used to measure performance and to set future targets; and qualitative information on the key success factors that explain how the benchmarked agency became best-in-class in that function.

The following steps will help ensure that a best practice is successfully imported:

• Determine the purpose and scope of the project
• Understand your own process
• Research potential benchmarking partners
• Choose performance measures
• Collect internal data on performance measurements
• Collect data from the partner agency
• Conduct a gap analysis
• Import practices to close performance gaps
• Monitor results
• Recalibrate based on findings
• Start the search anew
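To illustrate the "conduct a gap analysis" step, here is a minimal sketch in Python. The measure (cost per ton of refuse collected), the figures, and the function name are all hypothetical, chosen only to show how a gap between your agency and a benchmarking partner can be expressed; this is not a prescribed APWA calculation.

    # Minimal gap-analysis sketch for a single benchmarking measure.
    # All figures are hypothetical and used for illustration only.

    def performance_gap(own_value, partner_value):
        """Gap between our value and the partner's, as a percent of the partner's value."""
        return (own_value - partner_value) / partner_value * 100

    # Example measure: cost per ton of refuse collected (lower is better).
    own_cost_per_ton = 92.50       # our agency's internal data
    partner_cost_per_ton = 78.00   # best-in-class benchmarking partner

    gap = performance_gap(own_cost_per_ton, partner_cost_per_ton)
    print(f"Cost per ton is {gap:.1f}% above the benchmark partner")
    # Prints: Cost per ton is 18.6% above the benchmark partner

The quantitative gap only tells you how far apart the two agencies are; the qualitative half of the analysis – understanding the practices that produced the partner's result – is what makes the gap closable.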

Best Practices Review

A best practices review is a systematic study of variations in service level and design, work processes, and services among similar agencies to identify practices that are cost effective and that might be adopted by


other agencies. Practices refer to the processes, practices, and systems that, when performed exceptionally well, are widely recognized as improving an agency's performance and efficiency in specific areas. At their core, best practices reviews involve examining the performance of a program or function in an agency, identifying practices associated with higher levels of performance, and sharing those practices with potential adopters.4

A best practices review includes five major activities:

• Systematic analysis of service delivery
• Improving the understanding of measuring performance
• Creating a database of effective services
• Enhancing communication
• Linking up with other management tools

Canada Awards for Excellence Program

The National Quality Institute (NQI) promotes awareness and education of quality principles and practices, and recognizes excellence through the Canada Awards for Excellence program .

The NQI developed the Canadian Quality Criteria for the Public Sector as a framework for effective public service organizations and agencies at all levels . The emphasis is on achieving citizen-focused service delivery .

The public sector criteria related to performance measurement include:

Leadership
• A mission and mandate statement is in place and has been communicated to all levels in the organization.
• Key success factors and priorities have been determined and are linked to strategic direction: for example, the accountability for the organization.
• Strategic planning incorporates ambitious objectives necessary to achieve the mission and mandate, and is communicated to all levels in the organization.
• Implementation of strategic planning is monitored and reviewed.
• Responsibility, accountability and leadership for improvement are shared throughout the organization.
• Indicators of effectiveness of leadership in setting strategic direction and demonstrating leadership in the quality principles.
• Indicators of the level of understanding in the organization, of the mission, mandate and strategic direction.
• The organization evaluates and works at improving its approach to leadership.

Planning
• Key improvement issues have been identified, prioritized, measured and improvement goals set, including any actions regarding external partnering arrangements for the delivery of client services.
• Formal assessments, using criteria that reflect quality principles, are conducted to determine the organization's strengths and opportunities for improvement.
• The organization analyzes assessment findings to help determine priorities for improvement.
• Indicators of the degree of understanding, throughout the organization, of the priorities and goals established in the improvement plan.


• Indicators of effective implementation of the improvement plan throughout the organization.
• Levels and trends in quality assessment findings (for example, ratings and/or scores).

Citizen/client focus
• Information is gathered, analyzed and evaluated to determine client/stakeholder needs, including evaluation of potential partnering and/or third party service delivery arrangements.
• There are methods and processes in place that make it easy for clients/stakeholders to provide input on their needs, seek assistance and make complaints.
• The organization measures client/stakeholder satisfaction to gain information for improvement.
• Levels and trends of performance in dealing with client/stakeholder inquiries and complaints are compared to established service delivery standards.
• Levels and trends are examined in regard to client/stakeholder appeals, and, when applicable, in regard to product-related areas such as refunds, repairs, and replacement.

People focus
• The organization determines training and development needs to meet goals in the improvement plan, and responds to these needs.
• The organization evaluates the effectiveness of training and development programs.
• The organization measures people satisfaction at all levels, and links the feedback to future improvement opportunities.
• Indicators of the effectiveness of training and education, particularly of quality improvement principles and methods, are examined.
• Indicators of involvement levels in improvement activities that link directly to the goals and objectives of the organization are determined.
• Indicators of awareness and involvement in addressing issues related to well being (for example, health, safety and environmental concerns) are identified.
• Levels and trends of employee suggestions and ideas submitted and implemented are tracked.
• Levels and trends in employee turnover rates, absenteeism, and grievances are evaluated.

Process management
• Key processes are analyzed to determine opportunities for continuous improvement, through incremental refinement and/or fundamental redesign, including potential for reallocation of service delivery.
• External information is gathered and used to compare performance and to identify opportunities/ideas for improvement.
• Indicators of the effectiveness of the design process for new services and/or products, such as cycle times and frequency of process design changes, are determined.
• Levels and trends in process capability and cycle time for key service delivery and/or production processes are examined.

Supplier/partner focus
• Levels and trends of suppliers/service providers in their process capabilities and cycle times are studied.
• Levels and trends in the quality and value of provided services and/or products are identified.

Comparative Performance Measurement

The ICMA Center for Performance Measurement provides a means for local governments to share data on a range of programs, benchmark their performance to comparable jurisdictions, and improve service


delivery through the application of best management practices and efficient use of resources . A major purpose of the effort is to provide a common platform for benchmarking of services and to identify governmental practices that may contribute to high performance that might be adapted by other jurisdictions .

A consortium of cities and counties asked ICMA to coordinate this work and to assist in selecting services to be measured, identifying desired outcomes of service delivery, defining indicators, and collecting data . More than 120 agencies participate in the consortium .

The service areas that apply to public works include:

• Refuse collection – collection, recycling
• Road maintenance – road condition, operating and maintenance expenditures, street sweeping
• Fleet management – maintenance and replacement by vehicle type

Three types of indicators are used: outcome indicators, efficiency indicators, and input indicators.
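The short Python sketch below shows how the three indicator types might look for a hypothetical road maintenance program. The figures and the particular measures are invented for illustration and are not taken from the ICMA templates or any participating jurisdiction's data.

    # Sketch of input, efficiency, and outcome indicators for a hypothetical
    # road maintenance program. All figures are invented for illustration only.
    lane_miles_maintained = 1200          # scale of the program
    annual_expenditure = 3_600_000        # dollars spent on road maintenance
    lane_miles_rated_satisfactory = 1020  # from a pavement condition survey

    input_indicator = annual_expenditure                                              # resources applied
    efficiency_indicator = annual_expenditure / lane_miles_maintained                 # cost per lane-mile
    outcome_indicator = 100 * lane_miles_rated_satisfactory / lane_miles_maintained   # percent satisfactory

    print(f"Input:      ${input_indicator:,.0f} spent on road maintenance")
    print(f"Efficiency: ${efficiency_indicator:,.0f} per lane-mile maintained")
    print(f"Outcome:    {outcome_indicator:.0f}% of lane-miles rated satisfactory")

Comparing indicators like these across jurisdictions is only meaningful when the underlying definitions (what counts as a lane-mile, what counts as satisfactory) are applied consistently, which is the main value of a common consortium template.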

Governmental Accounting Standards Board (GASB) Statement No. 34

The Governmental Accounting Standards Board issued Statement No . 34, Basic Financial Statements and Management’s Discussion and Analysis for State and Local Governments in June 1999 . This statement established new financial reporting requirements for state and local governments throughout the United States . When implemented (June 2001, 2002 or 2003, depending upon annual revenues), it will create new information and restructure much of the information that governments have presented in the past .

For the first time, financial managers will be asked to share their insights in a required management’s discussion and analysis (MD&A) . This will give an objective and easily readable analysis of the government’s financial performance for the year . This analysis should provide users with the information they need to help them assess whether the government’s financial position has improved or deteriorated as a result of the year’s operations . The MD&A will include:

• Comparisons of the current year to the prior year using the new government-wide financial statements (described below)

• An analysis of the government's overall position and results of operations
• An analysis of significant changes that occur in funds and significant budget variances
• A description of capital asset and long-term debt activity during the year
• A description of currently known facts, decisions, or conditions that are expected to have a significant effect on financial position or results of operations.

New government-wide financial statements will be required, using accrual accounting and the economic resources measurement focus, for all of the government’s activities . It will report all revenues and all costs of providing services each year . These annual reports are expected to provide a new and more comprehensive way to demonstrate stewardship in the long term, the short term, and through the budgetary process .

Use of this new information from a performance measurement perspective needs to be assessed by each agency . It will potentially be very useful if the performance information is aligned with this new data .


Managing for Results

The managing for results process is a comprehensive approach to focusing an agency on its mission, goals, and objectives . It establishes the accomplishment of those goals and objectives as the primary endeavor for the organization, and provides a systematic method for carrying out the endeavor . It requires the establishment of performance measures and the use and reporting of those measures so that management, elected officials, and the public can assess the degree of success the organization is having in accomplishing its mission, goals, and objectives .

The stages of the managing for results process are:

Strategic planning
Strategic planning includes defining the program, identifying needs to be addressed, establishing the program purpose and mission, establishing the managing for results process, and assigning accountability.

Program planning
Program planning includes identifying and setting goals and objectives, identifying outcomes, assessing the ability to address needs, prioritizing goals and objectives, evaluating feasibility of making a difference, deciding on a preliminary strategy, establishing management systems, and identifying outputs, benchmarks and baseline measures.

Setting priorities and allocating resources
This includes developing a budget, prioritizing requests, identifying sources of revenue, feedback on priorities, support for program requests, submission of requests, analysis of requests, and decisions on requests.

Activity planning and organization
Activity planning and organization includes assessing allocated resources, establishing or modifying strategies, outputs, action plans, processes and activities, delegation of duties and authority, establishing interim goals and objectives, establishing diagnostic measures, linking resources and outputs and outcomes, and activity-based costing.

Management of operations
Management of operations includes the management system, management philosophy, communications links, regular feedback on results, contingency planning, controlling costs and quality, and producing goods and services.

Monitoring operations and measuring results
Monitoring includes obtaining information on results, understanding factors affecting results, services provided by the agency, services provided by others, explanatory factors, secondary effects, using diagnostic measures, and monitoring costs and revenues.

Analysis of, reporting, and obtaining feedback on results
Analysis includes long and short-term results, explanatory factors, strategies and outputs, verification of performance information, budget versus actual results, performance evaluations and audits, and communication of results to parties to whom the program is accountable.


Obtaining feedback on results
Feedback includes citizen and service recipient surveys, and feedback on results.

Management Practices Self Assessment and Accreditation

The American Public Works Association (APWA) has developed a self assessment program and an accreditation program to enhance the effectiveness of agencies and their competencies in the public works field . The Public Works Management Practices Manual contains over 450 recommended practice statements that describe the critical elements necessary for a full-service public works agency to accomplish its mission . These statements provide general guidance on what a public works agency should be doing – not how it should be done . This non-prescriptive approach allows each agency to tailor its practices to meet local conditions, be they organizational, geographic, climatic, political, or community-related issues . The statements are not designed to be standards .

Self assessment using the Public Works Management Practices Manual is a tool for determining how an agency’s policies, procedures, and practices compare to recommended practices identified by nationally recognized experts in public works . Agencies can use the recommended practices as a model for developing or improving existing practices, to enhance performance, increase productivity, and strengthen employee morale .

Accreditation is the final step in a process to recognize public works agencies that have conducted strategic planning using the recommended management practices established by APWA . Accreditation is strictly voluntary . Some agencies complete the self assessment and decide not to proceed with accreditation .

The self assessment process includes the following major steps:

• ensuring the four essential ingredients are present: committed leadership all the way to the top, a high degree of employee involvement, an organized system of documentation, and commitment to improvement;

• becoming familiar with the Public Works Management Practices Manual;
• preparing a proposal;
• selecting a manager;
• preparing an operational plan;
• developing a project budget;
• setting up a filing system for documentation;
• training all staff that will be involved;
• orienting all staff regarding the process;
• gathering information on each management practice;
• determining compliance with each management practice;
• keeping track of progress;
• reviewing documentation;
• inviting peer review if desired;
• presenting results to the elected governing body; and
• receiving recognition from APWA.


The accreditation process includes five phases:

1 Application .

2 Self Assessment – as described above .

3 Improvement – improving in all areas as identified in the self assessment .

4 Evaluation – on-site evaluation team of peers .

5 Accreditation – recognition, continuous improvement, reaccreditation .

Performance Budgeting5

A performance budget is a budget format that includes performance goals and objectives, and demand, workload, efficiency, and effectiveness (outcome or impact) measures for each program .

Program budgeting is an integral part of performance budgeting . Instead of dividing expenditures along departmental lines, a program budget classifies expenditures into groups of activities that are designed to achieve a common purpose . A performance budget provides information on how well these activities are carried out . So performance budgets address costs per unit (e .g . cost per pothole filled) and also ask:

• Are services being provided at an agreed-upon level of quality?
• Are programs achieving their goals?

The benefits of program and performance budgets include:

• providing the ability to monitor and improve productivity;
• making it possible to link performance to budget allocations;
• holding managers accountable for a given level of outputs and outcomes (as opposed to being held accountable for the consumption of inputs); and
• allowing for more decentralized decision making and more creative management.

There are four stages to implementing performance budgeting:

1 Creating performance measures .

2 Linking performance measures to budget allocations .

3 Reporting performance accomplishments .

4 Institutionalizing the process .
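As a minimal sketch of the unit-cost idea behind a performance budget (the program names and dollar figures below are hypothetical, not taken from any agency), a budget request can carry its planned outputs so a cost per unit is reported alongside the dollars:

# Minimal sketch: pair each program budget request with its planned outputs
# so a cost per unit can be reported with the request. Figures are hypothetical.
programs = [
    # (program name, budgeted dollars, planned output, unit of output)
    ("Pothole patching", 250_000, 5_000, "potholes filled"),
    ("Street sweeping", 180_000, 3_600, "curb-miles swept"),
]

for name, budget, planned_output, unit in programs:
    unit_cost = budget / planned_output
    print(f"{name}: ${budget:,} budgeted, {planned_output:,} {unit} planned, "
          f"${unit_cost:,.2f} per unit")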

Service Efforts and Accomplishment Reporting (SEA)3

In April 1994, the Governmental Accounting Standards Board (GASB) issued Concepts Statement No . 2, Service Efforts and Accomplishments Reporting . This statement focused on one of the objectives from GASB Concepts Statement No . 1, Objectives of Financial Reporting .

Concepts Statement No . 1 said that “financial reporting should assist in fulfilling government’s duty to be publicly accountable and should enable users to assess that accountability .” In order to accomplish that objective, financial reporting needs to provide information that will assist users (a) to determine


whether current-year revenues were sufficient to pay for current-year services, (b) to determine whether resources were obtained and used in accordance with the entity’s legally adopted budget, and (c) to assess the service efforts, costs, and accomplishments of the governmental entity .

Concepts Statement No . 2 further develops the last of these three objectives of financial reporting . It begins with a discussion of the governmental environment and the need for performance measurement reporting, explores the dimensions of governmental accountability, sets forth the elements of performance measurement reporting, elaborates the objectives and characteristics of performance information, and discusses the limitations of performance information and how to enhance its usefulness . Finally, it calls for experimentation with performance measurement and reporting and states that performance measurement reporting is considered an essential part of comprehensive financial reporting for state and local governments .

Concepts Statement No. 2 asserts that information provided by governments should be intended to assist in decision making throughout the public administration and budgetary cycle. Therefore, ideally a governmental entity should:

• Establish and communicate clear, relevant goals and objectives;
• Set measurable targets for accomplishment; and
• Develop and report indicators that measure its progress in achieving those goals and objectives (measures of performance).

For example, for governmental entities to have appropriate information for making decisions and assessing accountability, information must be provided about results achieved (service accomplishments or performance) through the use of the resources provided (service efforts) and how those results compare with what was planned . The terms economy, efficiency, and effectiveness often are used to describe the categories of performance information needed .

Strategic Planning

APWA chose the term “Strategic Planning for Public Works Agencies” to describe a process that includes developing an agency’s mission, vision and values statements, conducting a needs assessment using the Self Assessment process, establishing goals for improvement, developing an implementation plan and successfully attaining the agency’s goals . An agency that completes the entire strategic planning process is eligible to apply to APWA for agency accreditation .

Voluntary agency accreditation is not the end of the process . Accredited agencies should use the strategic planning process to provide a framework for continuous improvement of their policies, practices and procedures .

In Strategic Planning for Local Government published by ICMA, strategic planning is defined as:

Strategic planning in local government is a systematic process by which a community anticipates and plans for its future. The result is a written document that guides the community toward its future goals.

A formal strategic planning process has the following results:

• a mission statement for the organization;


• an environmental scan and conclusions about future scenarios in a three- to five-year period;
• basic goals for the time period in the scan, and goals for the coming one-year period;
• strategies and action steps that will move the organization toward the goals; and
• implementation plans that assign responsibilities for action steps.

Total Quality Management

Total quality management (TQM) is a system and set of procedures for performing operations analysis on a continuous basis . TQM relies on quantitative methods and the measurement of organizational activities to seek out ways to improve services and products continuously . The philosophy of total quality management is that providing quality services results in increased customer demand, and that continuous monitoring of quality is less costly than having to repair defects and incurring customer ill will . The hallmark of total quality management is a focus on the customer and on labor: asking the customer to define quality and viewing labor as an investment .

The primary tool of total quality management is “quality circles,” composed of members of labor and management that analyze a problem or function and recommend improvements . Quality circles include staff with the responsibility or expertise necessary to contribute to solving the problem at hand . Participants base their decisions on carefully gathered statistics .

TQM uses performance measurement data as baseline data for conducting a more detailed analysis of a particular program or process .1

And there’s more

These thirteen approaches are not meant to be all inclusive or suggested as “the answer .” Each approach has received or is receiving attention in the performance measurement airwaves . There are other approaches currently available and there are certainly more to come .


ORGANIZING THE PERFORMANCE MEASUREMENT EFFORT


“Never tell people how to do things. Tell them what you want them to achieve, and they will surprise you with their ingenuity.” - General George S. Patton

Appoint a performance measurement program manager

Every program needs a program manager – even performance measurement programs . This individual needs to possess (or have the capability to obtain) the following qualifications and characteristics to be effective:

Qualifications
• Knowledge of performance management, performance measurement and other improvement approaches.
• Project management skills.
• Belief that performance measurement is good for both the individual and the agency.
• Understanding of the many levels and decision-making dimensions of the agency.
• Knowledge of systems analysis and flow charting.
• Facilitation skills.
• Knowledge of communications planning, report writing and graphical presentations.
• Respect from within the agency and organization.

Characteristics
• Asks tough questions by focusing on what needs to be asked, not on what they want to say, and by using questions as a tool to gain understanding.
• Stays connected with those involved by remaining accessible, being receptive to bad news, and passing good news up the line.
• Listens and communicates effectively by getting to the point and by getting back to people quickly.
• Encourages a climate of innovation and creativity by rewarding behavior that leads to improvements and by encouraging people to ask "why" they do everything.
• Works on controlling things (processes, schedules, etc.), not on controlling people.

The responsibilities of the performance measurement program manager include:

• Lead the project team – set agendas, handle logistics, notify attendees, prepare meeting results, etc.
• Develop a work plan – tasks, timelines, accountability.
• Develop a communications plan.
• Facilitate discussions as needed (or seek an outside facilitator if appropriate).

Start with a performance measurement team and work plan

Depending upon the size of the agency, each program may need a work group or performance measurement team to oversee development and implementation of the performance measures. Members could include:


• Program Manager
• Members of the program staff
• Representatives of related programs within the agency
• A representative of the agency's central office
• A representative of the budget office
• A person knowledgeable about information processing, flow charting, and the agency's information systems

Working groups of 8 to 12 members are usually an effective size, though smaller groups can work where appropriate. For particularly complex programs, groups may need to be larger. The frequency of meetings depends upon the timeline; initially, regular meetings are important. And this is not a short-term project – it takes three to five years to fully implement a performance measurement system.

The work plan for the performance measurement team should include6:

1 Establishing the purpose and scope of the team .

2 Identifying the mission, goals, objectives, and clients of the program (see Chapter 2) .

3 Developing a communications plan to announce the program and to detail ongoing communication needs .

4 Training the work group on performance measurement tools and techniques .

5 Identifying the results (outcomes) that the program seeks .

6 Mapping out the current processes .

7 Holding meetings with interest groups, including customers, in order to identify outcomes desired from a variety of viewpoints .

8 Selecting specific indicators for measuring each outcome and other appropriate measures .

9 Setting baselines and performance targets .

10 Identifying appropriate data sources for each indicator and the specific data collection procedures needed to obtain the data . Developing data collection instruments .

11 Identifying the specific breakouts needed for each indicator, such as breakouts by customer demographic characteristics, organizational unit, geographical location, type of approach used, etc .

12 Identifying appropriate benchmarks against which to compare program results .

13 Developing an analysis plan – ways that the performance data will be examined to make the findings useful .

14 Selecting formats for presenting the performance information that are informative and user-friendly .

15 Determining the roles that any program partners (such as project grantees and contractors) with substantial responsibility for service delivery should play in developing and implementing the performance measurement process .

16 Establishing a schedule for undertaking the above steps, for pilot-testing the procedures, and for making subsequent modifications based on the pilot results .

17 Planning, undertaking, and reviewing a pilot test of any new or substantially modified data collection procedures .

18 Preparing a long-term schedule (typically about three years) for implementation, indicating the timing of data collection and analysis relevant to each year’s budgeting cycle, and the persons responsible for each step in the process .


19 Identifying the uses of the performance information by agency staff (such as in budgeting and helping improve programs) .

Identifying programs to measure

The various activities that your agency carries out should be clearly identified and divided into distinct programs. Programs are groupings of routine activities aimed at providing support for a certain service. For example, street patching, crack sealing, seal coating, and street resurfacing are four activities that together constitute the program usually called street maintenance. Programs are usually listed on an organizational chart contained in the operating budget. Programs relate directly to the organizational structure and the managerial areas of responsibility. Programs also correspond with the expenditure classification structure; that is, with the formally recognized cost centers.

Choosing which programs to measure is a matter of judgment. On the one hand, measuring too few programs covers only a small portion of services and produces insufficient information. On the other hand, too much reporting can be excessively costly, overwhelming, and impractical because it is not followed through. Performance measurement systems work best when they concentrate on collecting limited but essential information about basic programs that need the most managerial oversight and where accountability reporting is most important.

Developing a program mission statement

Developing a well-articulated mission statement for a program is a very important step . We can measure the performance of a program only if we know what the program should accomplish . A clear mission statement is the best beginning . See Chapter 2 for a process and guidelines for developing mission statements .

Identifying program performance measures

The primary focus is on outcomes. However, other types of measures should be considered if they work well. Program inputs are expressed as the money expended in an operating budget and the number of labor-hours or full-time equivalent employees. Outputs usually represent workload measures, or the quantity of the service delivered to users. The unit of output – the quantity of each service (output/activity) being produced by the program – can be included. Efficiency measures the cost (whether in dollars or employee hours) per unit of output or outcome, while productivity indicators combine the dimensions of efficiency and effectiveness in a single measure. For example, in a sanitation department, input indicators will be the amount of labor-hours, the budget of the department, the number of vehicles operated, etc. Output measures will include tons of refuse collected, miles of roads cleaned, number of customers served, etc. Efficiency indicators will include labor-hours per ton of refuse collected and dollars spent per mile of snow removal; and productivity indicators will include measures such as cost per mile of a clean street (i.e., total cost of all road cleaning divided by the total miles of clean streets).
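As a minimal sketch of how these indicator types relate to one another (the sanitation figures below are hypothetical):

# Hypothetical sanitation figures illustrating input, output, efficiency,
# and productivity indicators as described above.
labor_hours = 12_000            # input: labor-hours worked
program_cost = 480_000.0        # input: program budget in dollars
tons_collected = 9_600          # output: tons of refuse collected
miles_rated_clean = 2_100       # outcome: street-miles rated "clean" by observers

# Efficiency: cost (here, labor-hours) per unit of output.
labor_hours_per_ton = labor_hours / tons_collected

# Productivity: combines efficiency and effectiveness, e.g. total cleaning
# cost divided by the miles of street actually rated clean.
cost_per_clean_mile = program_cost / miles_rated_clean

print(f"Efficiency: {labor_hours_per_ton:.2f} labor-hours per ton collected")
print(f"Productivity: ${cost_per_clean_mile:,.2f} per mile of clean street")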


Setting targets for accomplishment

We need to specify the conditions that indicate the goals of the program have been met. Identify service effectiveness and quality, and explicitly state how we are going to check whether our objectives have been met. Usually, this is done by expressing the goals of a program in terms of dates (or periods of time) and quantities (including percentages). For example, for a street cleaning program of the sanitation department, this may mean achieving, by the end of the fiscal year, a 75% cleanliness rating in citizen surveys and an 80% cleanliness rating from trained observers. Targets for accomplishment are not necessarily confined to effectiveness indicators. Agencies can similarly set efficiency and productivity targets, or input/output targets. For example, in the case of resource scarcity, input targets may be very important. In the same street cleaning program, the inputs might be the number of FTEs, the program budget, and the number of street sweepers available.
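A target like the street cleaning example above can be checked with a simple comparison of actual results against the stated targets; the sketch below uses those figures, and the "actual" results are invented for illustration:

# Compare end-of-year results against the street cleaning targets described above.
targets = {
    "citizen survey cleanliness rating": 0.75,    # 75% rated acceptable or better
    "trained observer cleanliness rating": 0.80,  # 80% rated acceptable or better
}
actuals = {
    "citizen survey cleanliness rating": 0.78,
    "trained observer cleanliness rating": 0.76,
}

for measure, target in targets.items():
    actual = actuals[measure]
    status = "met" if actual >= target else "not met"
    print(f"{measure}: target {target:.0%}, actual {actual:.0%} -> target {status}")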

Training program leaders

If the performance measurement program manager doesn’t coordinate all of the program teams, program leaders need to be trained to do so . The performance measurement program manager should lead this training .

Developing a communication plan

The primary goal of your communication plan is to create an understanding of and support for the performance measurement program . Effective communication will enlist the involvement and support of employees, managers, elected officials, and citizens . The tools (surveys, meetings, etc .) used to get buy-in and to connect with your community will be defined and described in your communication plan .

A good communication plan answers the following questions:

• What are our communication objectives?
• Who is our audience?
• What are our assumptions?
• How do we test our assumptions?
• What don't we know?
• How will we measure success in accomplishing the objectives?
• What's the message?
• How do we position the message?
• What tools will be used to deliver the message?
• What is our implementation strategy?
• How does this align with everything else that's going on in our agency?


MAPPING OUT CURRENT REPORTING PROCESS


“The worst thing we can do is do the wrong thing well.” — Irwin Miller

Why is mapping important?

In step one of the readiness survey (Chapter 1), the first question is, “Is our agency already engaged in some sort of performance measurement process? What other initiatives are underway or are on the drawing board?” Mapping out all of the initiatives is very important .

By listing each initiative, its timeline, and who is involved, you gain some clarity about what is already on the plate and the competition for your limited resources. With this as the beginning point, you are ready to analyze your processes.

There are several questions that need to be answered for each process .

• What are the current processes?
• To whom is the information given?
• When is the information generated?
• What sources are used?
• What systems are used?

Using flowcharts

Flowcharting is a tool that helps to graphically display each process, its relationships, boundaries, cycle time, and decision locations . When using flowcharting, it is important to:

• visualize the process – talk yourself through it;
• list the major steps and then arrange them in sequence;
• draw simple, easily understood symbols around or next to each step;
• connect the symbols with flow arrows; and
• select those areas that need further details.

You should be asking the following questions when you are flowcharting:

• Who is doing the process?
• What is being done?
• When is it being done?
• Where is it being done?
• How much time does it take?


There are many types of flowcharts . The most common include:

Basic flowchart

The basic format shows different steps or actions represented by boxes or other symbols . This format can be used effectively to understand the major steps in a complex process . It is a good idea to begin with a basic flowchart .

Top-down flowchart

The top-down format provides an overview of a process and its major sub-steps or sub-processes . Beginning with the basic format, add the details for each major step . It is helpful to number the detailed steps in a sequential manner to make it easy to follow the relationship between the “parent” process and the sub-processes .

[Example: a top-down flowchart with three major steps; Step 1 has sub-steps 1.1 and 1.2, Step 2 has sub-steps 2.1, 2.2, and 2.3, and Step 3 has sub-steps 3.1 and 3.2.]


Detailed flowchart

When a detailed understanding of a process is needed, additional symbols are used to show the particular activity that is occurring in each step. The standard set of symbols is described below, followed by an example.

Activity: The activity symbol is a rectangle that designates an activity . Within the rectangle is a brief description of the activity .

Decision: The decision symbol is a diamond that designates a decision point from which the process branches into two or more paths . The path taken depends upon the answer to the question that appears within the diamond .

Flow line: The flow line represents a process path that connects process elements, e .g ., activities, decisions . The arrowhead on a flow line indicates direction of a process flow .

Connector or Link: The connector or link is a small circle that is used to indicate a continuation of the flow chart on to another column or page . Insert an alpha (a, b, c) or a numeric (1,2,3) in the circle when you come to the end of the page or column . Use that same alpha or number in another circle on the next page or column to continue the process flow . The link can also be used to indicate a place where the process moves to another program or activity .

Terminal: The terminal symbol is a rounded rectangle that identifies the beginning or end of a process . Within the symbol is a brief description of the activity that begins or ends the process .

Document: The document symbol represents the production of a paper document such as a form, report or letter that is pertinent to the process .

Electronic information: The electronic information symbol indicates the use of automation or an electronic transmission .

[Example: a detailed flowchart of a process with steps numbered 1.1 through 1.7, including a decision at step 1.3 that branches to sub-steps 1.3.1 and 1.3.2.]


Exploring possible improvements

Once you have a good understanding of the process, the next step is to explore the possibilities for improvement . The following questions will help you frame the analysis:

• How is the process impacting the program's performance?
• Are there changes that would have a positive impact on the program?
• Are there too many hand-offs to different people or other processes?
• Are there particular steps where errors are occurring?
• Are there too many or too few decision points?
• Can any steps be eliminated?
• Can any steps be automated to improve the process?
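One way to make these questions easier to answer is to capture the mapped process as data rather than only as a drawing. The sketch below is a hypothetical, simplified example (the step names, symbol types, and owners are invented) that counts decision points and hand-offs between owners:

# Hypothetical mapped process captured as data for analysis.
# Each step records its flowchart symbol type and who performs it.
process = [
    {"step": "Receive pothole report",  "type": "terminal", "owner": "Call center"},
    {"step": "Log work order",          "type": "activity", "owner": "Call center"},
    {"step": "Severe enough for crew?", "type": "decision", "owner": "Supervisor"},
    {"step": "Schedule patch crew",     "type": "activity", "owner": "Supervisor"},
    {"step": "Patch pothole",           "type": "activity", "owner": "Field crew"},
    {"step": "Close work order",        "type": "terminal", "owner": "Call center"},
]

decision_points = sum(1 for s in process if s["type"] == "decision")
hand_offs = sum(1 for a, b in zip(process, process[1:]) if a["owner"] != b["owner"])

print(f"Steps: {len(process)}")
print(f"Decision points: {decision_points}")
print(f"Hand-offs between owners: {hand_offs}")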


GETTING BUY-IN


“Few things are harder to put up with than a good example.” - Mark Twain

Recognizing the major obstacle

One significant obstacle to getting buy-in is the idea that performance measurement information will somehow wind up driving government policy . This assumption is wrong . At best, good performance measurements, like solid strategic planning, merely provide guides for the political process . This may help or hinder your performance measurement process . However, it is important to be up-front about this so everyone understands .

Addressing all the key questions

The first three questions your employees will think of – but probably won't verbalize – when you announce your performance measurement process (or any other major initiative, for that matter) are "me-based":

• What’s my job?• How am I doing?• Does anybody care?

Until these questions are addressed, employees are not able to move on to the next levels . The fourth and fifth questions are more “we-focused”:

• How are we doing?
• Where do we (our team) fit in?

The sixth question is where employees truly get involved:

• How can I help?

We face a real challenge when we take “macro” messages developed at the organization or agency level and try to translate them into how people do things differently – behaviorally, attitudinally, and functionally – at the work level . This is where most management improvement efforts break down .

Not answering these questions, particularly when you are talking about accountability, performance, and potential direct impacts on budgets and pay, will create significant obstacles – resistance and even organized efforts to get rid of the program.

Connecting with your employees, labor groups, managers and elected officials

Involving front-line supervisors from the beginning is critical. Without the front-line supervisors' input and understanding, buy-in for the effort will not succeed. Likewise, if you exclude union leaders until the end of the process, they will not support or buy into the program. If you spring requests for


information or involvement onto other departments like Human Resources, Finance, and Information Systems when the system is “ready to go,” you should expect resistance and a refusal to dedicate limited resources toward your effort . And if you just start sending new reports to the Mayor, Manager, Executive and Council without laying the groundwork, you will end up with lots of questions, concerns, anger and embarrassment .

The importance of buy-in cannot be overstated. You need to have a plan that ensures all of these groups (and maybe others) have an understanding of what you are trying to accomplish, what the benefits are, what resources this will require (that can't be used for other things), and what improvements to services will result. Briefings, presentations, feedback mechanisms, and progress reports are valuable throughout the process.

Committing to full disclosure

Some agencies find that a commitment to “full disclosure” is very useful – that means sharing the bad with the good, no sugar coating . Being forthright and direct is a must . It is a big mistake to withhold information . The rumor mill will beat you every time – and a lot of times the rumor mill is true .

Communicating directly with employees

One of the ways to beat the rumor mill is through direct communication . Face-to-face communication builds trust and credibility while ensuring that employees are receiving consistent messages . The best solutions for employee communication will always involve interactions, involvement of the various groups, and engaging people in discussion . When face-to-face isn’t possible, all employees should receive the same information via the appropriate electronic or hard copy media .

Communicating just once isn’t enough in today’s fast-paced, multi-channel communications environment . The broken-record approach needs to be utilized . The key is to use a combination of channels – face-to-face, print and electronic media . Each employee needs to be touched at least two or three times with the message, and in more than one way . Multiple tools, multiple channels, multiple ways – the more opportunities you can provide for employees to receive key messages, the more likely you are to connect with them .7


CONNECTING WITH THE COMMUNITY


“When a man say him do not mind, then him mind.” — Southern proverb

How do we find out what the needs of our community are?

You are already getting feedback from your citizens about what they want and what they think . All public works agencies are very accustomed to the public meeting and public hearing process . In many cases we are asking our citizens to “react” to a proposed plan, project, proposal, or policy change . This is an important process and will likely continue to be the mainstay of public interaction in many communities .

There are other dimensions to "connecting with our community" that are more proactive. The importance of citizen involvement cannot be overstated. Involvement can help build a sense of responsibility for the community. It can encourage greater interest in the governance of the community and in the results of that governance.

Typically, citizens focus on results: Is government getting the job done? Is the job being done fairly and ethically? Does the end result provide value for the public money spent? We need to be able to answer these questions in a way that is unbiased and believable . On the other hand, what can we actually measure that provides tangible results to address these questions?

We can focus on service quality as a performance measure . When we do this we must also turn our attention to measuring customer satisfaction . To measure customer satisfaction, we must ask our customers what they think . There are a number of ways to do this . The approach(es) you select will take into account the purpose of gathering the information, the timeline and resources available, and the effectiveness of the approach based on your experience with the community .

Not all public works agencies have a staff of public involvement specialists. External consulting services may be an option; however, much of the work still needs to be done by agency staff. Whenever engaging in a "connecting" effort, it is important to do it well. The image of the agency, its relationship with the citizens, and the impact on elected officials are all at stake at each one of these events. Doing them well or not at all is wise counsel.

The following connecting approaches can be considered .

Customer Councils

Feedback is given by a group considered representative of the customers of the agency or service. This type of feedback is useful when there are ongoing issues that customers may discuss among themselves, when customers feel that others can refine and reflect their views, and when customers may prefer to speak with other customers rather than approaching the agency directly.

Customer Interviews

A selected group of customers is interviewed systematically. This is similar to a focus group, but individuals are interviewed separately using open-ended questions. It provides more in-depth information than a survey, but does not delve into sensitive areas or new issues as well as a focus group can. This approach is useful for exploratory research.


Focus Groups

In-person interviews are used in a small group setting with a relatively homogeneous group of individuals selected for an in-depth discussion. This is useful for testing new survey questions, testing a message designed to change opinion, or addressing a new or possibly controversial area or service where reliable measures have not been validated.

Telephone hotlines

A telephone number is provided so the public can offer comments, suggestions, etc. While not a statistically valid representation of the population, it does provide a forum for receiving feedback on services.

Town Meetings

Town meetings are typically one or more elected officials meeting with constituents to get input/feedback on issues relevant to their district . The scope, format and intent of this approach can vary and change as desired .

Surveys

From a performance measurement perspective, surveys can offer the results desired to assess customer satisfaction . There are many approaches to surveying and much to consider .

Appendix D contains details regarding surveys in the following areas:

• Why survey?
• The American Customer Satisfaction Index
• What do you want to achieve with your survey?
• Types of surveys
• What resources are required?
• What needs to be considered when writing survey questions?
• Rules for constructing a survey
• Survey design
• Pretesting

Appendix E contains definitions for the following surveying terms:

• Bias
• Data
• Population
• Pretest
• Proxy or Surrogate Measures
• Respondent
• Service Quality
• Sample
• Survey


GETTING CLEAR ON PERFORMANCE MEASUREMENT DEFINITIONS


“The definition of insanity is doing the same thing the same way but expecting a different result.” - Michael Tamborrino

Many terms to consider

Many terms need to be defined as you develop your performance measurement system. There are also many different definitions for each of these terms, although they are similar in nature. What is important is that you clearly define all of the terms that apply to your agency – not all need apply. To assist you in this effort, alternatives are provided below for each of the most common terms.

There are three broad categories of performance measures: measures of service efforts, measures of service accomplishments, and measures that relate efforts to accomplishments. Performance measures should be reported for services your agency is responsible for providing, whether your agency provides the service directly or contracts it out. Additional explanatory information should also be provided as appropriate.

Use these definitions to create your own . The process you employ to create your definitions can range from you deciding what they are to a process that involves employees and citizens . Only you can decide what is most advantageous for your agency and its performance measurement process .

After the definitions are developed, they should be reviewed and approved by the mayor, manager, administrator, or executive and the appropriate council or board . This process will ensure that everyone is on the same page from the beginning . Differences of opinion on the definitions can be a major stumbling block when you report results later . You can also advertise the fact that there is agreement on the terms (as appropriate) .

Fully defining each performance measure

A definition of each performance measure needs to be created so it can be understood by everyone who views it . A typical definition includes:

• Program name
• Program manager name
• Program goal
• Type of measure
• Performance measure
• Specific baseline
• Performance goals/targets
• Data requirements
• Frequency and source of data
• Actual calculation method
• Definition of key terms
• Reports in which the data will appear
• Graphic presentation that will eventually be used to display the data
• Other rationale for the measure
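If your agency tracks these definitions electronically, one hypothetical way to capture the same fields is as a structured record; the sketch below mirrors the list above, with sample values adapted from the street sweeping example that follows:

# One hypothetical way to store a performance measure definition as a record.
from dataclasses import dataclass, field

@dataclass
class MeasureDefinition:
    program: str
    program_manager: str
    program_goal: str
    measure_type: str
    performance_measure: str
    baseline: str
    targets: str
    data_requirements: str
    frequency_and_source: str
    calculation_method: str
    key_terms: dict = field(default_factory=dict)
    reports: list = field(default_factory=list)
    graphics: str = ""
    rationale: str = ""

street_sweeping = MeasureDefinition(
    program="Street Sweeping",
    program_manager="(program manager name)",
    program_goal="Meet or exceed the cleanliness level expected by residents",
    measure_type="Outcome (target approach)",
    performance_measure="Percent of 'acceptable' or better cleanliness ratings",
    baseline="74% from the first citizen survey",
    targets="85% 'acceptable' or better",
    data_requirements="Survey results",
    frequency_and_source="Quarterly citizen and trained observer surveys",
    calculation_method="Ratings of acceptable or better divided by total ratings",
    key_terms={"Cleanliness": "free from dirt, litter, and debris"},
    reports=["monthly report", "quarterly newsletter", "annual report"],
    graphics="Five to ten year trend line",
    rationale="Street cleanliness is a tangible sign of a well-run city",
)

print(street_sweeping.program, "-", street_sweeping.performance_measure)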


Two examples of outcome measures using a target approach follow.

Performance Measure Definition

Program: Street Sweeping

Program Manager: Jim Smith

Program Goal: Continually meet or exceed the desired cleanliness level expected by our residents .

Type of Measure: Outcome (target approach)

Performance Measure: End Outcome – clean streets. Intermediate Outcome – 85% "acceptable" or better ratings for citizen survey and trained observations.

Baseline: 74% “acceptable” or better ratings from the first citizen survey . 80% “acceptable” or better ratings from observations of trained observers .

Data Requirements: Survey Results

Frequency and Source of Data: Annual citizen survey administered quarterly in specified council districts; annual trained observer survey administered quarterly in specified council districts.

Calculation Method: Six levels of satisfaction – terrible, poor, fair, acceptable, good, great .

Definition of Terms: Resident = all people who live or own businesses within our geographical boundaries. Satisfaction = the six levels of opinion. Cleanliness = free from dirt, litter, and debris. Streets = those roads within our geographical boundaries that are our responsibility.

Reports in which the data will appear: Monthly reports to the administration and council, quarterly public newsletter, annual report.

Graphics: Five to ten year trend line

Rationale: Our city prides itself on being a clean, comfortable, and safe place to live. Cleanliness of our streets is a tangible example of this in the community. The only way to really know how our residents feel in this area is to ask.


Another example for equipment maintenance follows:

Performance Measure Definition

Program: Equipment Maintenance

Program Manager: Jim Smith

Program Goal: All rolling stock is available for use when needed .

Type of Measure: Outcome (target approach)

Performance Measure: % of downtime (equipment not available when needed) .

Baseline: 90% or better overall (10% or less downtime) .

Performance Goals/Targets: 99% of equipment is available when needed. The time period is generally 8:00 a.m. to 5:00 p.m., Monday – Friday, but also includes emergency use, such as snow plowing or emergency sewer or street repair.

Data Requirements: Current equipment records.

Frequency and Source of Data: The equipment maintenance information system is available on-line and updated daily.

Calculation Method: % of downtime .

Definition of Terms: Downtime = equipment not available for work use due to need for maintenance or repair. Rolling Stock = self-propelled motorized equipment such as trucks, front-end loaders, autos, etc.

Reports in which the data will appear: Monthly reports to the administration and council, quarterly public newsletter, annual report.

Graphics: Five to ten year trend line .

Rationale: If equipment isn't available, operators will be idle, critical public services may be postponed, and service and efficiency will decrease.
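The downtime percentage used in this definition is straightforward to compute from equipment records; a minimal sketch with hypothetical hours follows:

# Hypothetical equipment records: hours each unit was needed vs. hours it was down.
equipment = {
    "Dump truck 12":      {"hours_needed": 2_080, "hours_down": 104},
    "Front-end loader 3": {"hours_needed": 1_560, "hours_down": 16},
    "Street sweeper 7":   {"hours_needed": 1_040, "hours_down": 52},
}

total_needed = sum(unit["hours_needed"] for unit in equipment.values())
total_down = sum(unit["hours_down"] for unit in equipment.values())

downtime_pct = total_down / total_needed   # the reported measure
availability_pct = 1 - downtime_pct        # the target is stated as availability

print(f"Downtime: {downtime_pct:.1%}  Availability: {availability_pct:.1%}")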


Performance measurement definition options

Appendix E contains definition options for the following performance measurement terms.

• Accountability
• Aggregation / disaggregation
• Baseline Data
• Benchmark
• Benchmarking
• Best-in-Class
• Continuous Improvement
• Customer
• Effectiveness / Outcome Measures
• Efficiency Measures (cost effectiveness)
• Explanatory Information
• Indicators
• Input Measures
• Measures of Accomplishments
• Measures of Efforts
• Measures that Relate Efforts to Accomplishment
• Metrics
• Outcomes
• Output Measures (workload)
• Performance Auditing
• Performance Communication
• Performance Goal
• Performance Management
• Performance Measurement
• Performance Measures
• Process Owner
• Productivity
• Program
• Quality
• Strategic Direction
• Strategic Goal
• Strategic Objective
• Strategic Planning
• Target
• Timeliness
• Unit Cost
• World Class


FOCUSING ON OUTCOMES


“If I had eight hours to chop down a tree, I would spend six of them sharpening the axe.” — Abraham Lincoln

don’t expect everyone to get it right away

Sometimes people have trouble measuring, because they are unclear about what they want to accomplish by doing so . Some people simply don’t know what to measure and how to measure it . Nonetheless, managers keep asking for better measures so they can decide which programs to fund and support . All of us are being asked to prove that our programs are worth the investment .

Don’t expect everyone to understand what outcomes are .

“Not everyone will welcome outcome measures. People will have trouble developing them. Public employees generally don’t focus on the outcomes of their work. For one thing, they’ve been conditioned to think about process; for another, measures aren’t always easy to develop. Consequently, they tend to measure their work volume, not their results. If they are working hard, they believe they are doing all they can. Public organizations will need several years … to develop useful outcome measures and outcome reporting.”

— Report of the National Performance Review, 1993

Some things really can’t be measured!

There are some who say that the really important things in life cannot be measured, made tangible, quantified, packaged, boxed, or tied down . Quantifiable data can be collected about the experience, but that is different from the experience itself . Calipers cannot be put on a dream, on happiness, on excitement, or on motivation . The essence is being in the experience, not standing outside measuring it . People tend to confuse the measurement with the experience . Measurements give useful, widely agreed-upon indicators of progress toward what is important . The measure is not what is important. The progress is . Measuring is not done for the sake of measurement, it is done because people want indicators of their progress toward something unmeasurable and important .

For example, if the following results were reported, how would you respond? “We overlaid 25 miles last quarter, 25 percent more than last year, and we are really motivated and happy in our work .” As the leader, I might feel terrific about that . We all understand “25 miles” and “quarter” and “25 percent” in the same way . We know what they mean; we share the same understanding of these numbers . But “motivated” and “happy” and “terrific” are a different matter . Though we all value motivation and happiness and “terrificness,” what is meant individually by these words can be quite different . And if we are out promoting work motivation and happiness and terrificness across the agency, these feelings will be much harder to communicate than last quarter’s paving figures .

Our structures and systems do much better with quantities than with qualities . We know how to keep track of numbers and are inclined to reinforce that ability rather than to figure out how to support what is immeasurable and important .


Another example is a set of values that are part of the mission, vision and values of the agency . Maybe one of your values is “respect .” The agency needs to show that it respects employees, customers, the council, etc ., in a variety of ways . Saying it in a couple of speeches and hanging it on the walls of our buildings just won’t cut it . The agency can express this value with structures, systems, policies, practices, and mechanisms . It should demonstrate that kind of support . But success relies on each person internalizing that feeling of being respected . Everyone in the agency needs to have a common understanding of what it means, how to act and not act to honor it, and see others acting in ways that are respectful .

Categories of information

There are three main categories of information that need to be measured for all public works agencies:

• Financial considerations
• Customer satisfaction
• Results

Depending upon which approach(es), model(s), or initiative(s) your agency has chosen, the structure for gathering performance measurement information will vary. All of the approaches begin with the "big picture" and eventually get down to specific performance measures for each program or service.

• Big Picture (Agency mission, vision, values, goals; strategic direction)
• Agency Goals – overarching, ongoing goals
• Agency Objectives – what will be accomplished during the fiscal year
• Agency Performance Measures
• Division Mission
• Division Goals
• Division Objectives
• Division Performance Measures
• Program Mission
• Program Goals
• Program Objectives
• Program Performance Measures

Taking it to the next level – the “So what?”

Measurement, when done well, helps people make better decisions about where to direct resources, what programs to fund, and if they should spend more (or less) in current programs . You spend all this time and energy to end up with data, actual results, trends, and indicators . Then it is time to take it to the next level, to answer the “so what?” question .

Let’s return to our paving program example where we overlaid 25 miles last quarter, 25 percent more than last year . So what? Big deal! What difference does this information make? What needs to change to improve things? Were there factors or influences that had an impact on the results that we had no control over? Is 25 miles great or poor? Was the weather the reason so much was accomplished? How much overtime did we have to work? What’s not going to get done since we spent more than budgeted?


Are we raising expectations for next year? How does this impact the replacement cycle? The important questions need to get answered . When you answer them, the “So what?” is described and everyone can understand it .

Using the overlay program described above as an example, the “So what?” answers might look like this .

The “So what?” for our Overlay Program

Overlay Program Outcome: Overlay all street surfaces every 13 years.
Planned overlay for 2000: 20 miles.
Actual Results: 25 miles overlaid in 2000, a 25% increase over 1999.

Trend:

Overlay Program - Miles Completed

[Chart: miles of overlay completed per year, 1990–2000.]

Satisfaction Level: Annual citizen survey: baseline is 70% acceptable or better; target is 85%. Trained observer ratings: baseline is 75% acceptable or better; target is 90%.

Explanatory Factors:

The weather was a major factor in the success of the 2000 overlay program. The normal average temperature (past 10 years) during the paving season is 72 degrees; the average during 2000 was 81 degrees. The normal average rainfall is 6 inches; in 2000 we had 1.5 inches. The conditions were prime for this program.

The replacement cycle will be slightly advanced . This one time surge will allow other road sections needing overlay to move up the priority list . Increases in overlay also improve the use of our annual maintenance budget because we spend less time filling potholes and doing crack sealing .

Additional revenues were available from the FEMA reimbursement for flooding in the fall of 1999. This one-time revenue of $x,xxx,xxx was applied to materials, supplies, and overtime as approved via a supplemental appropriation by council in April 2000. Our staff worked x,xxx hours of additional overtime to accomplish the higher level of work at a cost of $xxx,xxx.

The consequences of this action are neutral financially, and the people living in the Citrus Heights, Parkview, and Mountainview neighborhoods are very happy – they were scheduled for overlay two to three years out .
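The headline figures in a "So what?" summary like this can be recomputed the same way each reporting period; the sketch below reproduces the percentage change from the overlay example (the 1999 mileage is inferred from the reported 25% increase):

# Recompute the headline figures from the overlay example above.
planned_miles_2000 = 20
actual_miles_2000 = 25
actual_miles_1999 = 20   # inferred: 25 miles is a 25% increase over 1999

change_vs_plan = (actual_miles_2000 - planned_miles_2000) / planned_miles_2000
change_vs_last_year = (actual_miles_2000 - actual_miles_1999) / actual_miles_1999

print(f"Overlaid {actual_miles_2000} miles in 2000: "
      f"{change_vs_plan:+.0%} vs. plan, {change_vs_last_year:+.0%} vs. 1999")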


Types of outcomes

There are three main types of outcomes: targets, benchmarks, and change statements . You need to identify the most appropriate outcome type .

Targets
Targets are specific levels of achievement. For example: a 90 percent rating on a satisfaction survey.

Benchmarks
Benchmarks are comparative targets. Comparisons generally relate to other time periods or other agencies. For example, a higher satisfaction rating than other public works agencies in the region, or a 50 percent increase over the 1995 satisfaction rating.

Change Statements
Change statements reflect an increase or decrease in behavior or attitude. For example, an increase in the satisfaction level of our citizens regarding street cleanliness.

Consider the following questions when determining the type of outcome .

• What kinds of data are available?
• What is the time frame for measurement (short term or long term)?
• Is baseline data available for making comparisons?
• Can the change actually be measured?

What are the consequences?

Outcomes are the consequences of doing or not doing the job . The consequences can affect customers, employees, and the agency . Some general questions to consider include:

• Satisfaction, confidence, feelings – How confident and satisfied are people with the results?
• Accomplishment – Was the goal achieved?
• Aftermath – Was there any unforeseen fallout?
• Compliance – What regulations were met or not met?
• Cost – Were direct and indirect costs incurred or avoided?
• Image – How did goal achievement affect the status of the job, agency, or person?


EVALUATING PERFORMANCE MEASURES


“One doesn’t discover new lands without consenting to lose sight of the shore for a very long time.” — Author Unknown

Criteria for a good set of performance measures

Indicators in a properly developed set of performance measures should satisfy the following criteria established by David N . Ammons:

• Valid
• Reliable
• Understandable
• Timely
• Resistant to perverse behavior
• Comprehensive
• Non-redundant
• Sensitive to data collection cost
• Focused on controllable facets of performance

Characteristics performance measures should possess

Performance measures should meet the characteristics of relevance, understandability, comparability, timeliness, consistency, and reliability .3

Relevance
Performance measures should include data essential to provide a basis for understanding the accomplishment of goals and objectives of the agency that have potentially significant decision-making or accountability implications. Because the purpose of government is to establish and enforce laws, regulate activities, and provide services economically, effectively, and efficiently – not to earn profits – no single measure of performance is readily available to assist citizens in assessing accountability and in making economic, political, and social decisions. A broad variety of performance measures is required to meet the diverse needs of different users of services, to report on the many goals and objectives of the programs and services provided, and to address the issues being considered for different decisions and levels of accountability.

Understandability
Performance measures should be communicated in a readily understandable manner. They should communicate the performance of the agency, department, program, or service to any reasonably informed interested party. The use of tables, graphs and charts often helps make information understandable.

Performance information should be concise yet fully disclose everything that needs to be shared . Both conciseness and comprehensiveness in reporting performance measures are important because of the number, diversity, and complexity of programs and services . A balance should be achieved among the number of services reported, the performance measures reported, and the capability of people to understand and act on information .


Performance information should include explanations about important underlying factors and existing conditions that may have affected performance . Explanatory information should be reported with the measures of performance both for factors over which the agency does and does not have control .

Performance information may be accompanied by a description of the way in which the performance measures should be used . This could include comments on the need to consider performance measures in conjunction with explanatory information, the need to consider multiple aspects of performance when assessing results, instances where proxy or surrogate measures are being reported because of an inability to measure an outcome of a service, and the difficulty of using performance information to assess policy accountability .

Comparability
Performance information should provide a clear frame of reference for assessing the performance of the agency and its programs and services. Performance measures, when presented alone, do not provide a basis for assessing or understanding levels of performance. Therefore, performance information should include comparative information. This information may take various forms, for example: comparisons with several earlier years, targets established by the agency, externally established norms or standards of performance, or other comparable agencies.

Timeliness
Performance information should be reported in a timely manner so it is available before it loses its value in assessing accountability and decision-making.

Consistency
Performance information should be reported consistently to allow readers to compare performance over time and gain an understanding of the measures used and their meaning. However, performance measures also need to be reviewed regularly and modified or replaced as needed to reflect changing circumstances.

Reliability
For performance information to be of value, it is essential that it be reliable. To be reliable, information must be verifiable and free of bias, and should faithfully represent what it purports to represent. Therefore, performance information should be derived from systems that produce controlled, verifiable data. The value of a strong internal control structure has long been recognized when dealing with financial information. The same controls are needed for non-financial information.

The realities of performance measurement

You can count on a number of realities as you design, develop, implement and improve your performance measurement system. Here are a few to be aware of:

Acceptance is essential
Acceptance of the measurement process is essential to its success as a performance improvement tool. The process by which you determine what to measure, how to measure, and how to utilize the measures is more important than the actual results themselves.

Audience – for whom are you doing all this?
The audience/user and purpose must be clearly defined. Who are the customers and end-users for the performance measurement system? What are their requirements? What do they feel they need from measurement to help them do a better job managing, problem solving, and decision-making?


Participation – the more the better
The greater the participation in the process of creating a performance measurement system, the greater the resulting performance change, and the greater the ease of implementation of future changes based upon the results. This participation includes employees, management, administration, council and the citizens. Measures must be seen to have value well beyond the program performance level. Performance measurement and reporting becomes not only an accountability tool, but also an advocacy tool.

It’s difficult, hard and complexPerformance measurement is hard and complex . Once we accept this, measurement can become less difficult . A complete and effective system of performance measurement will require several years of consistent, incremental work to achieve . One of the reasons performance measurement is difficult is that these measures were not measured before; and the resulting uncertainty dampens enthusiasm substantially for some individuals and work groups . It is complicated by the fact that there is no generally accepted “bottom line” in civic government because there is no scientific or analytical measurement that indicates the relative benefit to society of, for example, less toxic waste or better public transit .

Behavior is impacted
Measurement of any kind will affect the behavior of individuals within your agency - for better or for worse. It has a nearly universal capacity to focus attention. Management needs to recognize its obligation to monitor and direct the resulting changes in focus. Reporting performance measures will also affect the behavior of the administration, council and the citizens.

The right measures, the wrong target
It is NOT "the right measures" that you are shooting for. Instead, it is a process and culture for choosing, using and revising measures to assist employees in focusing on achieving improvement over the long run.

Tells about the past
Measurement at its best only tells you something about the history of your performance.

Employees' attitudes are impacted
No matter how well an employee's work is planned, managed and measured, the outcome will depend much more on how passionate the employee is concerning the work.

Poor management of the system
Problems related to an agency's outcomes are much more often related to poor management of the systems than to poor performance by individuals.

Watch out for the trivial
It is easy to measure the trivial. It is much more difficult to measure what is truly important and to do it in an objective way.

Strategy alternatives

There are many different strategies that you can employ as you proceed with your performance measurement system, as follows:


Focus on service delivery improvement
Do not make a commitment to measurement, benchmarking or any other process or program, nor to the reporting of heroic results, nor to assigning blame. Instead, make a commitment to service delivery improvement, adding new activities and deleting old ones.

Improve those things that make a difference
Aim to improve the things that make a difference (those with large costs, large customer value, substantial consequences, etc.).

Measure what employees can translate into direct action
Measuring global hunger is interesting and of monumental importance, but few can apply any direct action in their daily work. For performance measurement to be motivational, those to whom the measures apply must be able to see what to do. There must be a "line of sight" between the actions employees can take and what shows on the measure. Being held accountable for measures with no clear means to affect them is negative at any level.

Measure what's important strategically
Measure what is of value to citizens and will move you toward your vision, not just what is easy to measure or already being measured.

Involve each work group
Have the work group or program team that produces the result develop the measures, perform the measurements, and report the results.

Good performance measures need to be:

• directional - to confirm that you are on track to reach the goals,
• quantitative - to show what has been achieved and how much more is to be done, and
• worthwhile - adding more value than they cost to collect and use.

Limitations of performance information

Unfortunately, many public employees, elected officials, and media people believe that regularly collected outcome information tells whether the program and its staff were the primary causes of the outcomes . But outcome information provides only a score . The information tells whether one is “winning or losing” and to what extent, but it does not indicate why .

What caused a change in the outcome? Many factors could have been the cause. Usually, only if the program has considerable evidence, such as that from an in-depth program evaluation, can clear linkages be drawn between the program and the extent of an outcome. Performance measurement helps focus ad hoc program evaluations on the issues or questions that matter most, when that type of in-depth analysis is warranted and cost-justified.

A clearer understanding of the limitations of outcome data can reduce the tendency to blame public employees immediately when performance measures show unfavorable outcomes . A reduction in blaming, in turn, makes the process less threatening to managers and staff and should encourage greater use of outcome information for improving services .


We need to explain the nature of outcome data internally, to elected officials, and particularly to the media, so that our agencies are not blamed prematurely and unfairly for negative outcomes .

Readers of performance information need to be aware of the limitations so that the information can be used appropriately . The types of limitations include:

One measure won’t workGenerally, a single composite measure cannot adequately communicate the results of providing a service or group of services . It is necessary for readers to use several measures to assess the performance of an agency or program .

Measures are not enoughPerformance information does not, by itself, explain why performance is at the level reported, how to improve performance, or the degree to which a service (or other factors) contributed to the outcome reported . Readers may require additional information beyond what can be provided in the report to fully understand the relationship between an outcome and the many factors affecting that outcome . Performance measurement results only report that a condition exists, and does not explain the cause or causes of a trend . Further study may be necessary – if the cost justifies it .

Relevancy is hard to pegIt may be difficult to determine whether the reported performance measures are the most relevant measures of the achievement of a goal or objective .

Proxies complicate mattersFor some services, it may not be possible to measure the most important outcome, so the performance measure reported may be a proxy or surrogate measure that is in some way related to the desired outcome . These measures are more difficult to understand .

The right goals and objectivesPerformance information provides data about the achievement of goals and objectives, but does not provide the information needed to assess whether the goals or objectives are the most appropriate ones, and the ones that most clearly reflect the values of the community .

Policy accountability is a whole different gamePerformance information does not provide all the data needed to assist in assessing policy accountability . This assessment involves determining the relative value of a service to society or specific groups, or the comparative value of two or more distinct services .

Watch the costThe cost of gathering data for some performance measures can exceed the value received . In some cases, an acceptable level of imprecision may be needed .

We have limitsOur agencies have limits on influencing community outcomes . Pointing out these situations can be important .

Overcoming the limitations of performance information

Many of the limitations of performance measurement can be overcome, and the usefulness of the information enhanced. When done well, performance measurement provides essential information to assist in assessing accountability and making decisions. Here are a few of the ways that these limitations can be mitigated:

Clear communication
The uses of performance information and the potential difficulties associated with its use should be clearly communicated. There is an important educational component in communicating this information, which can include a description of how performance information is a necessary part of the data needed to measure the performance of your agency and the programs and services you provide. Descriptions also could explain that this information is intended to assist readers in determining whether goals and objectives are being achieved in terms of efforts, outputs, and outcomes and the efficiency of operations.

Explain the "why?"
The descriptive information also could note that performance measures should be considered in conjunction with explanatory information to understand why performance is at the reported level, and the degree to which the items reported in the explanatory information may have affected the reported results. When proxy or surrogate measures are used, an explanation of what would be considered an ideal measure and why it was not used, combined with an explanation of how to interpret the proxy or surrogate measure, is helpful.

Highlight "other" factors
Even with comparative information, there often is not a clear cause-and-effect relationship between the service provided and the resulting outcome. There may be numerous explanatory factors, completely or partially beyond the control of the agency, that have a significant effect on the results. These factors, when identified, should be reported as explanatory information with the performance measures, together with an explanation of their possible effect.

Going beyond the results
Performance information, even with comparisons and explanatory factors, may not provide sufficient information about why a program or service is performing at the reported level. Therefore, additional information gathered through program evaluations, performance audits, or other means may help readers understand the reasons for a given level of performance.

Be concise, yet complete
Performance information should include only those measures that are essential to provide a basis for assessing accomplishments or that have potentially significant decision-making or accountability implications. However, it is important to balance this need for conciseness with concern for completeness. If the reported performance information is less than comprehensive, the reader may be left with gaps in understanding about the results, or with an incomplete picture of the results. Comprehensive reporting of performance information also helps to prevent selective reporting of only those measures that provide positive results. At the same time, too many measures can confuse and overwhelm readers.

Willingness and openness to change performance measures
We shouldn't expect our performance measures to be eternal. Times change, technology changes, and we just plain may have missed the mark with our first, second, or fiftieth attempt. Be willing to change or modify the measures.

GATHERING DATA

“Each problem that I solved became a rule which served afterwards to solve other problems.” — Rene Descartes

Keeping a handle on data collection

Data collection is a key tool in the performance measurement process . Some of the benefits of data collection include:

• Allowing everyone to work with the facts.
• Enabling the agency to establish baseline measurement criteria.
• Providing information to measure the success of implemented improvements.
• Helping identify or confirm a problem that exists.

Data are gathered and analyzed for each performance measure to determine if – and how well – goals and objectives are being met. It is easy for the data collection and analysis process to get out of hand. It is tempting to take advantage of the many data resources available via the Internet, intranets, and financial systems; while these resources are valuable, collection should not be undertaken solely for the sake of research. Keep in mind that data are collected and analyzed to get answers. The following guidelines will assist in the data gathering process:

Keep it focused
Ensure that the right data, and only the right data, are collected. Avoid repetitious or unnecessary compilations. Make sure that the questions originally posed by the performance measures are answered.

Keep it flexible
Data should be collected from a variety of sources and through a variety of media. Any one system isn't necessarily right or wrong. Although using automation is preferable, sometimes manual systems are necessary and even cost-effective.

Keep it meaningful
A few basic, well-aligned measures are better than a number of complex ones.

Keep it consistent
Data collection should be based upon a set of agreed-upon definitions. These definitions need to be universally understood by employees, supervisors, managers, administrators, elected officials, and citizens. Data collected within a framework of understanding can be easily compared and analyzed, allowing subsequent evaluations to be "apples to apples."

Different levels of information for different needs

Each agency, department, division, and program has different needs for the data gathered. These differences need to be reflected in the data collection process.

Operational differences
The data focus for line supervisors and employees relates to daily operations and customer service. Operational performance data are useful to these groups. These data are often best gathered as part of the employee's interaction with the customer, e.g., number served, time to process requests, etc.

Tactical differences
The data focus for managers and program managers relates to customer satisfaction (or dissatisfaction). These data are usually collected through customer surveys. Another kind of data that managers and program managers are interested in is program cost. These data come from financial systems and are used to react to conditions, and also to implement proactive measures to reduce unnecessary costs. Additional performance information that is useful relates to employee morale, monitoring safety, and identifying skill deficiencies, which can be obtained through surveys.

Strategic differences
Senior managers need to determine whether their departments or divisions are meeting or exceeding the expectations defined in their strategic plans. Generally, they target a few vital measures as critical to their responsibilities. Rather than immersing themselves in day-to-day program details, senior managers look for trends.

These different focuses result in different uses of the data, different time frames for using it, and differing opportunities for improvement. The frequency with which performance measures are collected and analyzed influences your ability to make needed improvements.

Transforming data into information

Data analysis in performance measurement is the process of converting raw data into performance information and knowledge. The data collected are processed and synthesized so that informed assumptions and generalizations can be made about what happened. The results are then compared to what was expected to happen, to determine whether there is a difference, why, and what corrective action might be needed.

To ensure that everyone can use and understand data and its analysis, staff must understand some statistical methods such as:

• Trend analysis: comparing data from one period to another and noting the percentage change.
• Plotting data: upper and lower limits are set to determine how many events fall outside of the norm.
• Aggregating and disaggregating data: showing many layers of data combined into one (aggregated) or broken out into meaningful parts (disaggregated).

The disaggregation of data is one area that can really help turn data into information . It helps in communicating by providing information at a more meaningful level and by separating variations in performance that may be hidden by aggregated information .

For example, a man has drowned in a lake whose overall average depth is 12 inches . The overall average depth of the lake, while accurate, hides the fact that sections of the lake are much deeper . The lack of such breakouts has been a major defect in most performance measurement systems at all levels of government throughout the country .

A service may be working well for some types of clients and not for others . It may be working well in some field offices, districts, or facilities, but not well in others . It may be working well for a less difficult workload, but not for a more difficult workload .


Breaking out the aggregate data for outcome and service quality indicators will be much more useful to program staff in assessing where the service has been successful and where it has failed . It will help staff direct their attention to where improvements are most needed .
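As a minimal illustration of the idea, here is a Python sketch using hypothetical districts and repair counts; the same records support a single citywide rate (aggregated) or a breakout by district (disaggregated), where a weak area hidden in the average becomes visible:

# Hypothetical pothole-repair records, one row per district.
repairs = [
    {"district": "North", "on_time": 38, "total": 40},
    {"district": "South", "on_time": 22, "total": 45},
    {"district": "East", "on_time": 41, "total": 42},
]

# Aggregated view: one citywide on-time rate.
citywide = sum(r["on_time"] for r in repairs) / sum(r["total"] for r in repairs)
print(f"Citywide on-time rate: {citywide:.0%}")

# Disaggregated view: the same data broken out by district, which can
# reveal an area performing well below the citywide average.
for r in repairs:
    print(f"{r['district']:<6} on-time rate: {r['on_time'] / r['total']:.0%}")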

New York City’s department of sanitation has for many years allocated its sanitation crew effort to those locations that had poor cleanliness ratings, based on the city’s regular trained-observer rating procedure for street cleanliness . Inspectors rate the city streets from 1 to 5, according to a photographic rating scale . As you can see from this example, one of the most important uses for performance information is to help programs allocate their scarce resources to those areas that most need attention .

You should try to disaggregate whenever you can . However, there is no standard way to disaggregate . You need to decide what best communicates the results .

Most commonly, performance measures are broken down by geographical areas (e.g., streets, neighborhoods, etc.); by organizational units (e.g., different facilities within a sanitation department, etc.); and by degree of difficulty of the incoming workload (e.g., mortality rates for different age and risk groups). Other levels of disaggregation include the size of the jurisdiction or the type of service.

There are also many automated tools available to perform statistical analysis of the data including charting, graphing, trending, forecasting, etc . Advanced statistical programs are available and staff with spreadsheet knowledge can develop multiple types of analytical models .

Sources of data

Data can come from many different sources; therefore, it is important to limit yourself to the data that already exist in your file cabinets, bookshelves, and computer systems, if at all possible . A great deal of time and expense can go into developing new sources of data . If really necessary and justified, it may be appropriate to develop a new source of data . Generally, you want to work with the data you already have – with the exception of survey information .

Factors to consider when collecting data

In order to ensure the collection of accurate, reliable, and timely data, the agency should develop a data collection strategy. Failure to do this may result in vulnerability to questionable data and constant rework of the data as you react to criticisms or requests for credible and complete information. Consider these factors in your strategy:

Define the data
The data definitions must be developed, identifying the attributes to be included as well as those to be excluded from the reporting system.

Document the process
At a minimum, an outline should be developed that identifies how the data will be collected. This should be developed for each performance measure, given that the process may be different for each.

Document data sources
Whether data are obtained from manual logs, check sheets, computer databases, surveys, or focus groups, it is critical to maintain a record of the source. This step is critical because the person collecting the data this year may not be the same person next year. In addition, it is dangerous to rely on memory for tasks not done daily, weekly or even monthly. Not documenting data sources makes the process vulnerable to inconsistency and inaccuracy in reporting data.

Data manipulation
The design should explain how data has been manipulated to create the reported results. The term "manipulation" used here does not imply the data are being doctored to present a skewed picture. Rather, the term refers to the calculation to determine a numerical relationship.

Explanatory factors
Any contextual or procedural information should be recorded as an explanatory factor. Where assumptions are made, there is always the possibility of misinterpretation of the data. Assumptions should also be included in the explanatory factors. And any factors beyond the agency's control that influence program outcomes also should be identified and explained.

The data collection process

The process for collecting the data needs to include the following considerations:

Define the data to be collected
Brainstorm the types of data to collect for each program. Be sure to involve all those who will be required to provide data or otherwise assist in the effort.

Consider issues related to sample size and frequency of data collection
Determine whether or not sampling is appropriate. Sometimes it is impractical from both a time and expense perspective to collect data on every process or from all members of a population. It often is necessary to rely on sampling to infer characteristics of the whole by examining selected portions.
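Where sampling is used, the required sample size can be estimated from the standard formula for estimating a proportion. A minimal sketch, assuming a 95 percent confidence level and an assumed proportion of 0.5 (the most conservative choice); for small populations, a finite-population correction would shrink the result:

import math

def sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    """Approximate sample size for estimating a proportion.

    z - z-score for the desired confidence level (1.96 for ~95%)
    p - assumed proportion; 0.5 is the most conservative choice
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size())      # about 385 responses for a +/-5 point margin
print(sample_size(0.10))  # about 97 responses for a looser +/-10 point margin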

Design a data collection sheet
The data collection sheet should capture who will collect the data, how often the data will be collected, how the data will be recorded, and the process that will be used (manual or automated).
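One way to capture those decisions, sketched below with hypothetical fields and example values rather than a prescribed format, is a simple record kept for each measure:

from dataclasses import dataclass

@dataclass
class CollectionPlan:
    """Records who collects a measure, how often, and by what method."""
    measure: str
    collector: str         # person or unit responsible
    frequency: str         # e.g., "monthly", "quarterly"
    recording_method: str  # e.g., "manual tally sheet", "automated export"
    data_source: str       # where the raw data lives

pothole_plan = CollectionPlan(
    measure="Percent of potholes repaired within 48 hours",
    collector="Street maintenance supervisor",
    frequency="monthly",
    recording_method="automated export",
    data_source="work-order database",
)
print(pothole_plan)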

Test the collection method and refine the steps
Always do this step, no matter how simple you think the data collection is. It will save you rework in the long run.

Work on summarization during data collection
Summarizing data as you go enables you to spot trends early, which is better than waiting for the deadline to begin tallying. Trying to analyze data, especially significant amounts, under the pressure of a deadline often leads to careless mistakes. Summarizing data as you go along also will avoid unpleasant surprises at the end.

Generate and collect only the data to be analyzed
Do not waste time collecting data that you do not plan to use. A caution: you don't always know if the data will be fully used. It is better to err, initially, on the side of having too much. Going back to gather data can be painful and time-consuming.


Monitoring targets

Each target should be monitored on a continuous basis. Monitoring provides you with the results needed to decide whether or not the target is accomplished. Monitoring gives the program manager an opportunity to keep tabs on the operation of the program, and to take corrective action if required during the program. For instance, if the target calls for 95 percent of citizens to be satisfied with the state of cleanliness and general maintenance of roads by the end of the year, it does not mean that results are going to be checked only once a year. Smaller-scale customer surveys can be completed on a quarterly or monthly basis. They will help show whether everything is going as planned, and whether there is any seasonal (cyclical) pattern in customer satisfaction. Monitoring will vary depending on the service and the target of accomplishment. For the most important services (programs), a monthly data collection and reporting system should reflect all identified program measures.
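A minimal sketch of that kind of interim check, using hypothetical quarterly survey results against an annual 95 percent satisfaction target:

ANNUAL_TARGET = 0.95  # share of citizens satisfied with road cleanliness

# Hypothetical quarterly survey results for the current year.
quarterly = {"Q1": 0.93, "Q2": 0.96, "Q3": 0.91, "Q4": 0.95}

for quarter, rate in quarterly.items():
    status = "on track" if rate >= ANNUAL_TARGET else "below target"
    print(f"{quarter}: {rate:.0%} ({status})")

# Year-to-date view; a dip in a single quarter (for example, after winter)
# may point to a seasonal pattern worth investigating before year end.
ytd = sum(quarterly.values()) / len(quarterly)
print(f"Year-to-date average: {ytd:.0%}")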

Using trained observers

Another way of collecting data is through the process of observation . For example, trained observers, using a set of pictures as a guide, can rate the cleanliness of streets .

The concept of trained observers involves training people to rate conditions of tangible structures – streets, sidewalks, signs, buildings, etc. This is done by using scales that have been carefully chosen as meaningful to the public as well as to you, and by training the observers in such a way that their results can be duplicated. Rater reliability is important because you want consistency. The training of observers and the selection of the scale are key. To help the training process and to make the scale understandable, most of the scales hinge on photographs that illustrate the different levels of the scale in addition to written descriptions.

The trained observer ratings are used as input to performance measures. If you are measuring street cleanliness, trained observer ratings on litter are one way to get reliable information.

Trained observers also are very useful in dealing with citizen equity issues . Citizens often are concerned that the streets and sidewalks in their area are poor in comparison to those in the rest of the city . A systematic set of measures of the condition of the streets and sidewalks can show whether there is equity across areas or whether the citizen is right and their area is worse than others .
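A minimal sketch of how such ratings might be summarized; the area names, the scores, the assumption that 1 is cleanest and 5 is dirtiest, and the 2.5 threshold for an acceptable average are all hypothetical:

from statistics import mean

# Hypothetical trained-observer ratings (1 = cleanest, 5 = dirtiest).
# Several observers rate sample blocks in each area.
ratings = {
    "Downtown": [2, 2, 3, 2],
    "Riverside": [4, 5, 4, 4],
    "Hillcrest": [2, 1, 2, 2],
}

ACCEPTABLE_AVERAGE = 2.5  # assumed citywide standard

# List the worst-rated areas first, flagging those that miss the standard;
# this is also a simple equity check across neighborhoods.
for area, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    avg = mean(scores)
    flag = "needs added crew effort" if avg > ACCEPTABLE_AVERAGE else "meets standard"
    print(f"{area:<10} average rating {avg:.1f} - {flag}")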

DEVELOPING PERFORMANCE INFORMATION THAT HAS REAL VALUE

“Never mistake motion for action.” — Ernest Hemingway

Reporting and using performance information

Performance information needs to be disseminated quickly. The value of the information can decline, and the information can even become useless, if it is not made available when decisions are being made. The methods used to disseminate performance information include hard copy reports, newsletters, scorecards, e-mail, meetings, videoconferencing, use of local area networks, etc.

Who should receive the performance information and when?

It is important to develop a schedule for disseminating performance information . Generally, the following guidelines will work:

• Performance information that is for the whole agency should be disseminated to everyone within the agency at the same time .

• Performance information that applies to a work group or individual should be disseminated to the manager, program manager, work group or individual simultaneously .

• Performance information that is being shared with administrators, elected officials and/or the public should be reviewed for accuracy before being disseminated .

How can the performance information be used?

Performance information can be used for many purposes including:

Making resource allocation decisions
There are important linkages among resource allocation (budgeting), strategic planning, and performance measurement. If there is alignment among the strategic plan, goals, objectives, and performance measures, performance information can be used as one of the factors when allocating resources.

Conducting employee performance appraisals
"Pay for performance" systems are being implemented in some public works agencies. When performance information contributes to these pay systems, it is imperative that clear, consistent performance measures be developed, that these measures be communicated to employees, and that employees receive regular feedback on their performance. See Chapter 13 for more information.

Improving processes
In many cases, if performance targets are not met, corrective action or process improvements are undertaken. On the other hand, if performance targets are exceeded, they may be reset to establish "stretch" goals. Caution should be taken not to continually increase the target without the involvement of those employees impacted. There needs to be a balance between pushing for continued improvement and allowing employees the chance to exceed expectations.


Documenting accomplishments
Actual results and accomplishments can be shared with many audiences, both internal and external to the agency. This is an opportunity to share this information, to educate, and to engage in a dialogue about how to improve.

Performance information presentation and reporting

Presentation of the data should be concise, easy to understand, and tailored to the needs of the audience . Citizens, and other external audiences, require summary presentations of the data . An operational manager needs more details and supporting contextual information . A senior manager requires data presentations and displays that focus on bottom-line performance results, allowing him or her to quickly digest information, to focus on problem areas, and be more efficient in making program decisions .

The following general guidelines apply to all presentations:

• Text: This is generally the simplest way to present data. The data are listed in an interpretable order. However, text does not lend itself to visualization of trends.

• Tabular: Tables prepared to present information should be clear, simple, and easy to read. Tables can record figures or percentages, and are a good tool for presentation of a range of information in a compact and interpretable form. Percentage tables should claim no more precision than your data warrants. In most situations you don't need more than one or two decimal places (if any).

• Graphical: There are many types of graphical presentations including bar graphs, line graphs, pie charts, Pareto charts, run charts, control charts, etc. These visual displays can be very effective. Each type of graph can be more or less effective depending upon the situation. Experimentation with graphical display software, or advice from a graphic artist, may be needed.

Both management and policy participants in the performance improvement process can utilize service and financial measures more powerfully if they are displayed clearly, i .e . graphically . Tabular compilations of data are useful, primarily for professionals who use that data daily . Other participants in the public policy process—elected executives, legislators, board members, business advisory groups, policy-level administrators, union members, interest groups, the general public and the media—are not always as adept at drawing conclusions from detailed tables as are the professionals who construct them . Most people, however, can draw conclusions when the data is displayed visually, e .g ., by the use of bar graphs and trend charts . By relating information about service to information about demography and geography, graphics facilitate the identification and analysis of present and projected problems/services .
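As a minimal sketch of such a display (hypothetical quarterly results plotted against a target line with matplotlib), the point is simply that a reader can see at a glance which quarters fell short:

import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
on_time_rate = [0.88, 0.91, 0.86, 0.93]  # hypothetical results
target = 0.90

plt.bar(quarters, on_time_rate, color="steelblue", label="Actual")
plt.axhline(target, color="firebrick", linestyle="--", label="Target")
plt.ylabel("Share of service requests handled on time")
plt.ylim(0, 1)
plt.title("Service request performance by quarter")
plt.legend()
plt.savefig("performance_by_quarter.png")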

Performance reporting summarizes all the indicators and compares actual results to the targets. Besides comparisons with targets, reports may include comparisons with:

• the previous period (and include a year-to-date roll-up);
• similar jurisdictions;
• technically developed standards or norms;
• geographical areas or client groups within the same jurisdiction; and
• public-sector/private-sector costs and results with similar organizations.
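A minimal tabular sketch of such a report, with hypothetical measures, targets, and prior-year figures; a real report would also carry the explanatory information discussed earlier:

# Hypothetical report rows: measure, target, actual, and prior-year actual.
rows = [
    ("Potholes repaired within 48 hrs (%)", 90, 87, 84),
    ("Street-sweeping routes completed (%)", 98, 99, 97),
    ("Citizen complaints per 1,000 residents", 12, 14, 15),
]

header = f"{'Measure':<42}{'Target':>8}{'Actual':>8}{'Prior yr':>10}"
print(header)
print("-" * len(header))
for measure, target, actual, prior in rows:
    print(f"{measure:<42}{target:>8}{actual:>8}{prior:>10}")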

Information should be organized in such a way as to facilitate comparisons . Reporting and monitoring formats can coincide quite frequently .

Analysis and action are the logical conclusion of performance measurement . A well-developed system allows you to spot weaknesses and threats, as well as strengths and opportunities . Better knowledge of strengths and weaknesses will give the program manager (and others) an opportunity to diagnose program growth capabilities and take relevant actions .

LINKING YOUR PERFORMANCE MEASURES TO BUDGET PROCESS, PERFORMANCE APPRAISALS AND PAY

“I find that the harder I work, the more luck I seem to have.” — Thomas Jefferson

The challenges and difficulties of linking performance measures to the budget process, performance appraisals, and pay cause many agencies to struggle with these concepts. There is no general agreement that any of the three linkages is a good idea or appropriate. On the other hand, how do you really establish accountability if there is no direct impact on employees, supervisors and managers? You are now part of the debate. You need to decide how to address these challenges in your performance measurement program.

Linking to the budget

Linking your performance measurement system to the budgeting process is always a good idea . However, it is easier said than done .

Most budget systems are used to accomplish a number of purposes:

• allocate resources,
• control spending,
• establish service levels (based on resource allocation), and
• communicate what is to be accomplished with the resources.

There are also several potential budget formats including:

• incremental budgeting,
• line item budgeting,
• performance budgeting,
• program budgeting,
• target-based budgeting, and
• zero-based budgeting.

Many organizations are moving toward a budget system that monitors performance targets, measures productivity, and establishes accountability . The desire is to know whether each program is achieving the intended results at a reasonable cost . If you have implemented program budgeting and/or performance budgeting, the odds of success are much higher .

Before performance measures can be integrated into budgeting in a meaningful way, agreement must be reached on program mission, goals and objectives . With these established, appropriate measures can be selected that will adequately reflect whether progress is being made .

Some agencies are linking performance to budget allocations by awarding larger allocations to managers who have achieved or surpassed performance targets. The theory behind this approach is that other departments will try to emulate the budgetary success of the better managers – and that overall productivity will improve. For example, the City of Sunnyvale measures performance to reward successful managers. If a program exceeds its objectives for quality and productivity, its manager can receive a bonus of up to 10 percent. Another approach might be to increase the manager's budget so he/she can do "research" or "experiment" with innovative improvements.

On the flip side, in some agencies, elected officials are also trying to reduce funding for those who fail to meet targets . Although such links are a potential incentive for improvements, they raise complex management issues and should be handled with caution . When funding reductions are used to punish poor performance, the result may be a reduction in productivity . It may also encourage managers to lower their targets so they can meet them . Not achieving a performance target may signal the need for reduced funding, but it may also signal the need for more funds .

There is an added challenge . The relation of cost to the amount of expected workload is relatively clear for many services . However, the relation between dollars (or number of employees) and service outcomes often is unknown . For example, how many additional employees and dollars would enable your agency to increase the customer satisfaction level by five percentage points? In most cases, no one really knows . However, program managers should articulate strategies to bridge the gap between desired and baseline performance, and should use measurement to help test the effectiveness of these strategies . Some other measures of service quality, such as response times to calls for service, are more closely related to amount of staff available . When service information is obtained for a number of years, more of these causal relationships will become better known .

Despite this limitation, it is appropriate to discuss, formulate, and justify budgets at least partly on the basis of service outcomes and quality . In preparing and justifying their budgets, program managers should discuss at least qualitatively what they believe the outcome and quality implications will be .

A link between budgeting and performance that is both respectful of each program and responsive to the needs requires:

• Frequent (monthly or quarterly) performance reports
• A careful assessment of barriers to meeting the targets
• Thoughtful analysis of the implications of the report for budget allocations

A performance budget creates a contract between elected officials and the program managers and their teams (so much service of such a quality for so many dollars) .

What affects people’s performance? The activities and duties that make up a job also shape performance . People’s performance is affected by:

• how well the agency has defined their roles, responsibilities, and relationships;
• how well the agency has designed the rules, procedures, and processes associated with their job;
• how efficient and appropriate the technology used in the job is;
• how mature and functional the business relationships they must deal with are;
• how clear, accurate, and timely the information they must work with is; and
• how similar and reasonable their customers' expectations and needs are.

These elements make up the characteristics of the job. Some jobs have well developed, thoroughly documented procedures; others do not. Some jobs put people in long established, mature relationships; others do not. Roles and relationships are well defined for some jobs but less so for others. A change in procedures, relationships, or any of the other elements disrupts the balance. Whether the change is looked on positively or negatively, the agency and the individual have to adapt to restore the balance. If they do not adapt, performance suffers.8

Linking to performance appraisals and pay

Some agencies are trying to link pay to their performance measurement system. This is difficult to do well. The most common practice is for managers to rate individual contributions to performance goals in individual performance appraisals, as a way of ensuring the goals are met.

Others are using incentives by giving employees credits toward training courses . Exceptional performance can also be acknowledged in newsletters and other publications, or with annual awards, plaques, and dinners .

Judging individual performance by the achievement of performance goals sends a powerful message to employees. It also establishes dynamics that may or may not be wanted.

In some cases, increased focus on individual employee performance produces decreased focus on responsibility to the work group and the agency . Intense focus on individual performance encourages competition at the expense of cooperation .

A popular multi-source performance appraisal, commonly known as 360-degree feedback, has a strong foothold in the private sector and is growing in the public sector. It is estimated that 90 percent of Fortune 1000 firms use some form of this type of system. In the majority of organizations, this feedback is used developmentally: ratings are collected anonymously and fed back to managers in the aggregate. Usually, only the manager being rated sees the feedback. The ratings are not included in the managers' formal performance appraisal. Increasingly, however, management is asking, "How do we hold individuals accountable for making improvements if they are the only ones who see the data? If the individual needs development and chooses to ignore the feedback, we can't remedy the situation." This is a reasonable question, particularly when organizations are spending a good deal of time and money on the 360-degree feedback process.

There are valid reasons for limiting the use of 360-degree feedback to developmental purposes and separating it from the formal appraisal process . When individuals believe the ratings will be used for performance appraisals, they may alter their ratings . Game playing may occur – supervisors may try to get higher ratings by catering to subordinates, etc . In some cases, the idea of subordinate or peer ratings as part of one’s appraisal is so taboo that many individuals boycott the process and refuse to participate .

Many individuals involved in the 360-degree feedback implementation process strongly discourage its use for evaluation . This also holds true for all types of performance appraisal systems .


USING PERFORMANCE MEASUREMENT INFORMATION TO MANAGE

“Dissatisfaction is the basis of progress. When we become satisfied, we become obsolete.” - J. Willard Marriott

To make the cost of implementing performance measurement worthwhile, public agencies need to be sure that the information obtained can help improve public services .

Can performance measurement really help me manage?

This is a good question. If you feel the answer is "no," then performance measurement in your agency is probably a waste of time and money. The answer can be "yes" for the following reasons.

To run their agencies successfully, public managers must have certain key items of information available. This applies to all levels of management and to all kinds of agencies. It is important that performance measurement be considered an inherent and indispensable part of the management process. Performance measurement contributes to the following:

• Better decision-making: it provides managers with information to perform their management control functions
• Accountability: it fosters responsibility on the part of managers
• Service delivery: improvements in public service performance
• Public participation: clear reporting of performance measures can stimulate the public to take a greater interest in and provide more encouragement for government employees to provide quality services
• Improvement of civic dialogue: it helps to make public deliberations about service delivery more factual and specific

Asking the right questions and doing something with the results

Many supervisors, managers, directors and elected officials intuitively understand how to ask the right questions about the results produced by the agency's programs. But as the performance measurement answers arrive in the form of hard numbers and softer survey responses, another set of questions arises:

• What should elected officials and public works leaders do with the information?
• What changes should be made in programs, policy or management practices in response to the data?
• How should budgets be modified?
• Does measuring and reporting performance have any consequences at all?

If the end result is that we do nothing with the information, make no changes, keep the budget the same and there are no consequences, then why are we spending all this time and effort?

An excellent manager’s superior interpersonal and leadership skills have much greater potential to foster improvement than does performance measurement . However, the combination of excellent skills and good performance information can be extremely powerful and have a positive impact on the programs and services .

SUGGESTIONS FOR IMPROVING PERFORMANCE INFORMATION USEFULNESS

“The best way to get a good idea is to get a lot of ideas.” - Linus Pauling

Enhancing the usefulness of performance data is a true challenge and should be a continuing goal of the performance measurement program . These suggestions should help you focus on this aspect of the work .

Address service quality and outcomes explicitly when reviewing services and programs

These discussions among elected officials and the public works director should occur both during the annual budget process and during the year as program and policy issues arise . When elected and executive branch officials begin to discuss specific service outcomes and quality, then management throughout the agency will take performance measurement seriously .

Ask program managers to set a target for each performance measure and assess progress

These targets should be set annually, based on past experience and on the budgeted resources anticipated for the upcoming year. Targets also should be established for each reporting period during the year, e.g., quarterly.

Actual performance against the targets should be reported, and managers should review with their staff the program’s performance after each report has been issued . Upper-level managers and executives should also review and discuss the actual performance results . Their focus should be on needed improvements and be constructive .

Include indicators of both “intermediate” outcomes and “end” outcomes

Few outcomes that are truly end-oriented are fully under the control of the public manager . And many months, if not years, may elapse before the results show up . Intermediate outcomes, the ones most commonly reported by programs, directly relate to customers but do not indicate end results .

For example, environmental programs encouraging households or businesses to reduce hazardous wastes and to dispose of them properly should track such intermediate outcomes as the number of households and businesses that report changing their behavior in favorable ways . The water quality and the condition of living resources in the water should be measured as end outcomes, but these program effects likely will take longer to show up (after household and business behavioral changes .)


Include performance measurement in your training programs

Agency training programs should include information on the performance measurement process and on ways to use the information, such as allocating resources, performance contracting, formulating and justifying budgets, and making performance appraisals.

In the early stages of performance measurement, one-on-one consultation helps managers develop useful measures that are feasible to collect.

Incorporate outcome performance requirements into contracts wherever feasible

In writing contracts, pay more attention to what is to be accomplished and less to how the contractors are to accomplish it . Requests for proposals and even, in some cases, requests for bids should specify the results expected in quantitative terms to the extent possible . To make such contract stipulations meaningful, the agency needs to monitor performance carefully against these requirements .

Local governments have included in contracts such measures as the number and timeliness of correction of complaints in solid waste collection contracts; number of road breakdowns and percent of scheduled times met in transit contracts; and percent of vehicles returned in automotive repair contracts .

Some of the above contracts included specific dollar rewards or penalties for exceeding, or failing to meet, targeted values. The existence of an ongoing performance measurement system should allow agency managers to monitor more easily and enforce the provisions of these contracts.9
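A minimal sketch of how such a provision might be checked each reporting period; the measures, targets, and dollar amounts below are hypothetical, and real reward or penalty terms would be set in the contract itself:

# Hypothetical contract provisions: targeted value and the dollar
# adjustment applied when the target is met or missed.
provisions = [
    {"measure": "Complaints corrected within 24 hrs (%)", "target": 95,
     "actual": 97, "reward": 5000, "penalty": 5000},
    {"measure": "Scheduled transit times met (%)", "target": 92,
     "actual": 89, "reward": 3000, "penalty": 3000},
]

net_adjustment = 0
for p in provisions:
    met = p["actual"] >= p["target"]
    net_adjustment += p["reward"] if met else -p["penalty"]
    print(f"{p['measure']}: target {p['target']}, actual {p['actual']} "
          f"-> {'met' if met else 'missed'}")
print(f"Net contract adjustment: ${net_adjustment:,}")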

IS THERE A BOTTOM LINE?

“Imagination is more important than knowledge.” - Albert Einstein

What is measured and reported gets attention!

The statement in this section's heading (what is measured and reported gets attention) is true, especially if the information is used to formulate and justify budgets, feeds performance appraisals, and is reported to the elected officials and the media. Misuse and counterproductive use can occur if the performance measures are not chosen and implemented with care.

For example, solely focusing on the time it takes to provide a service is likely to cause employees to push for speed of service delivery at the expense of the quality of the service .

To reduce such problems, take these two steps: First, when selecting performance measures, explicitly consider potential negative side effects and include measures that track them . Response time measurements should be balanced by measures of the quality of the service, such as by surveying all, or a random sample, of the service’s customers .

Second, to protect against counterproductive behavior, periodically query staff as to whether they feel they are being pushed by the measures to undesirable behavior, such as rushing through a service at the expense of service quality . If so, revise the set of performance measures to provide a better balance between desirable and undesirable effects .

Measuring program outcomes and quality is not easy

At best, the development of a performance measurement system is a multi-year effort . For many of the services we provide, new data gathering techniques will be required, such as customer surveys and trained observers . All of these techniques require staff time, resources, and technology to make them happen .

To make the cost of implementing performance measurement on a continuing basis worthwhile, we must ensure that the information can help improve public services. Our managers are key users of this information and, perhaps, play the most crucial role in making performance measurement successful. When managers reap real benefits, their commitment to the sometimes formidable task of capturing and reporting performance data increases.6

Is there an answer?

It is dangerous to offer generalized solutions for a wide variety of agencies, situations, and needs . But performance measurement requires a certain level of boldness . There is one strategy that will surely work no matter which “big picture” approach you are using .


Suggested Strategy

Develop outcome measures for each of your programs . The results will fit into any of the higher level approaches .

The ultimate performance measurement system is elusive. Many public works professionals, including you, are in search of some answers. You are in good company. Few, if any, public works agencies have fully developed outcomes for each program and have linked them to their business plans, work plans, and budgets.

You will need a method by which work groups can create and continually modify performance measurement systems suited to their own inevitably special needs and circumstances. No matter which approach you take, what won't work is a standard set of measurements created by experts, or obtained from a "shopping list," that is imposed on the agency. The shopping list approach may help you get started down the path, but it won't get you to your final destination.

Performance measurement has no purpose if the data are not used to improve performance .

The bottom line is that sooner or later you are going to be required to measure performance . Are you going to be proactive and design a useful, good system or be reactive and play along with whatever is dictated? The choice is yours .

APPENDIX A - SOURCES

1. Joni Leithe, Implementing Performance Measurement in Government, Government Finance Officers Association, 1997.

2. George Labovitz and Victor Rosansky, The Power of Alignment, 1997.

3. Governmental Accounting Standards Board, Concepts Statement No. 2, Service Efforts and Accomplishments Reporting.

4. Wray and Hauer, "Best Practices Reviews for Local Government: Identifying and Sharing Better Ideas on Public Services," Public Management, January 1996.

5. Bland and Rubin, Budgeting: A Guide for Local Governments, ICMA, 1997.

6. Harry P. Hatry, Performance Measurement: Getting Results, 1999.

7. Lin Grensing-Pophal, "Follow Me," HR Magazine, February 2000.

8. Judith Hale, The Performance Consultant's Fieldbook: Tools and Techniques for Improving Organizations and People, 1998.

9. Hatry, Gerhart, and Marshall, Eleven Ways to Make Performance Measurement More Useful to Public Managers, September 1994.

10. Karen Carney, Successful Performance Measurement: A Checklist, 1999.

11. Bunker and Alban, Large Group Interventions: Energizing the Whole System for Rapid Change, 1997.

APPENDIX B - BEST PRACTICES IN PERFORMANCE MEASUREMENT MODEL

PERFORMANCE MEASUREMENT PROCESS MODEL

Customers and Stakeholders Input
Management Priorities and Decisions
Congressional Priorities and Decisions

Customer Driven Strategic Planning (Multi-year Goal Setting and Resource Planning, Annual Performance Planning, Resource Allocation)
• Mission is clear and energizes employees
• Strategic goals and objectives have focus and are stretching
• Owners are identified for goals and objectives
• Strategies are developed and resources allocated
• Customer needs are addressed
• Outputs and outcomes are defined (logic models or other tools are used)
• Decision issues and decision processes are used

Performance Reporting to Customers and Stakeholders

Establishing and Updating Performance Measures and Goals
• Management culture is supportive
• Measures flow from goals and objectives and are developed by managers working with multi-disciplined teams, focus groups, and stakeholders
• Inventory of common measures is explored
• Balanced scorecard or similar tools are used
• Measures cascade and align through the organization
• Performance levels are reflective of resources

Establishing Accountability for Performance
• Ownership of each measure is formalized and resources provided
• Responsibilities for data collection, reporting, analysis and posting are identified
• Managers use measures to evaluate performance
• Reward systems are clear and consistent, and are reflective of level of success

Measuring Performance (Data Collection and Reporting)
• Data sources are identified
• Information systems are designed to support data collection and reporting
• Pilot tests are conducted
• Automated or manual requests are used for periodic updates
• Data entry, tabulation, and summarization methods are documented for each measure
• Data definitions for common measures are followed
• Reliability, timeliness, accuracy, rapid access and confidentiality are addressed

Analyzing and Reviewing Performance Data
• Data are integrated
• Analytical capabilities are developed
• Results are analyzed and validated
• Management reviews results vs. expectations and makes mid-course corrections
• Feedback is provided to activity/process owners for continuous improvement

Evaluating and Utilizing Performance Information
• Activity/process owners use performance information for continuous improvement
• Results are displayed and shared with customers and stakeholders
• Rewards and recognition are based on results
• Benchmarking and comparative analysis with best in class are done
• Management feedback is provided for updating goals and measures
• Performance information is used to identify opportunities for re-engineering and allocation of resources

APPENDIX C - MISSION, VISION, VALUES, GOALS

Mission, Vision, Values One-day Workshop Agenda

8:00 Welcome - Why are we here?

8:10 Background & Beginnings - Agenda review, what's happened so far, expectations, ground rules, warm-up

8:30 Environmental scan - What's going on in the community and agency that impacts the work and atmosphere of agency staff? How much change is going on? Large group brainstorm. Outcome: description of the current situation.
Or: Changes, Transitions, Beginnings and Endings - Presentation and discussion of the change process. Foundation materials from: Managing Transitions – Making the Most of Change, William Bridges.

9:30 Honoring past efforts - What has gone into getting the organization to where it is now? Respect the energy, intelligence, sweat, motivation, spirit, and intention that has gone into bringing the organization to this point. Large group brainstorm. Outcome: list of events, actions, etc. that have contributed to our success.

10:00 BREAK

10:15 Where are we headed? - Review current mission, vision, values, goals, strategic plan, etc. 5x5 Grid Exercise - where we are now, where we want to be, what will it take to move us there? See Chapter 1, Readiness Assessment, for details on this exercise. The x-axis assesses "technical skills - e.g., knowledge, quality, ability, productivity" and the y-axis assesses the "human skills that build and sustain relationships - e.g., live the agency's values, show care and concern for customers and fellow employees, personal accountability, teamwork." Outcome: graphic that defines the now and the future; list of actions that will move the agency forward.

11:30 LUNCH

12:15 Definitions - Mission, Vision, Values

12:45 Develop mission statement - Review current mission statement. Exercise - individual statements to group consensus

2:00 Develop vision statement - Review Where We Are Headed results. Exercise - group discussion

3:00 BREAK

3:10 Select values - Selection of the words that represent the desired behaviors. Definitions will need to be developed later.

4:00 Wrap-up, Evaluation

4:30 Adjourn

What facilitation skills do I need?

A good facilitator has the knowledge, skills, and ability to do ALL of the following:

• Create a realistic and effective agenda.
• Record accurately what is being said so everyone can see it.
• Establish appropriate ground rules.
• Create an atmosphere of openness and trust in order to help people feel free to contribute and work creatively together.
• Support everyone to do their best thinking.
• Be flexible and still accomplish the goals of the meeting.
• Remain neutral and objective.
• Keep the meeting focused and moving.
• Listen actively and ask others to do the same.
• Assist the group in dealing with conflict.
• Balance participation among the group members.
• Incorporate everyone's point of view.


• Cultivate shared responsibility.
• Move the group through the stages of group decision making and consensus.
• Protect group members and their ideas from attack or from being ignored.
• Bring closure through an action plan, decision, or solved problem.
• Capture the key points of the meeting in a well-written summary.

There are three specific skills that are vital to ensure a successful result: problem solving, recording, and achieving consensus . Each of these skills is described in more detail to assist you .

Problem solving skills

Collaborative problem solving skills are a must . There are many problem solving tools . The one most frequently used is brainstorming . Typical brainstorming rules include:

• Don’t criticize any ideas . No comments, no grunts or groans, no thumbs-down gestures

• No idea is too wild!• Quantity of ideas is important .• Write the ideas so everyone can see them .• Sort the ideas into groupings .• List advantages and disadvantages (if needed) .• Before you voice a criticism, you must first say what you like about the idea .• Decision-making – use consensus to arrive at a win-win solution .

Recording skills

Recording is capturing what is being communicated during the discussion so everyone can see it . With fast paced discussion this presents a challenge for the recorder . Here are some reminders to be effective at recording .

• Use exact words; do not paraphrase.
• Use as few words as possible.
• Wait until you know what to write.
• Stay focused on the group – not the paper.
• Be willing to revise.
• The act of recording is as important as the words themselves.
• Recording should not be confused with summary writing or minutes.
• Write legibly; write fast.
• Understand and use symbols and colors appropriately.

Achieving consensus

Consensus has a number of meanings to group members, so it is important to agree on what it means in every group before it is used .


Consensus is mutual agreement among members of a group where all legitimate concerns of individuals have been addressed to the satisfaction of the group .

• Consensus is only used for making decisions.
• Trust is a critical factor. Everyone must be confident that the other members are speaking in good faith.
• Consensus doesn't mean compromise (of strong convictions or needs).
• No one "gives up" anything.
• A solution is found that everyone can live with.
• Members can "stand aside" when they have concerns about a proposal but can live with it.

This signals that the person feels his or her concern has been heard, understood, and considered, although not necessarily accepted, by the group in its final decision . Standing aside is an option only for people with concerns .

• If more time is needed on an issue, the presenter needs to negotiate the additional time . A single objection stops the request for more time and the amount of time is set .

• Everyone may not think it’s the very best solution, but they can accept it without feeling that they are losing anything important

• Everyone agrees to support the decision, idea, recommendation, etc .

Consensus Voting

Depending upon the group dynamics, it may be appropriate to vote to achieve consensus. Group members vote by selecting one of six comfort levels to indicate their feelings regarding the decision. A working consensus is achieved when every vote is at comfort level 4 or better (although it may be wise to provide additional time for those who voted level 4). No one can hold up a decision with a level 5 or 6 vote unless they have an alternative proposal or solution. Even those who vote level 4 in the room must agree to give the decision 100% support when they leave the room. Anyone who objects at level 5 or 6 must be specific about the reason(s) for objecting. Additional work may then be needed by the group and by those objecting to reach a consensus at level 4 or better.

Comfort Level 1 I can say an unqualified "yes" to the decision. I am satisfied that the decision is an expression of the wisdom of the group.

Comfort Level 2 I find the decision perfectly acceptable.

Comfort Level 3 I can live with the decision; I am not particularly enthusiastic about it.

Comfort Level 4 I do not fully agree with the decision and need to register my view about it. However, I do not choose to block the decision. I am willing to trust the wisdom of the group.

Comfort Level 5 I do not agree with the decision and feel the need to stand in the way of this decision being accepted.

Comfort Level 6 I feel we have no sense of unity in the group. We need to do more work before consensus can be reached.
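The comfort-level vote lends itself to a simple tally. The following is a minimal sketch (not from the guide; the function name and member names are illustrative) that flags whether a working consensus has been reached, identifies objectors at levels 5 or 6, and notes level-4 voters who may warrant additional discussion time.

```python
# Minimal sketch of tallying a comfort-level vote.
# A working consensus requires every vote to be at level 4 or better;
# votes of 5 or 6 are objections that must come with specific reasons.

def tally_comfort_vote(votes):
    """votes: dict mapping member name -> comfort level (1-6)."""
    objectors = [name for name, level in votes.items() if level >= 5]
    level_4_voters = [name for name, level in votes.items() if level == 4]
    return {
        "working_consensus": not objectors,
        "objectors": objectors,            # must state specific reasons or alternatives
        "level_4_voters": level_4_voters,  # consider allowing additional discussion time
    }

if __name__ == "__main__":
    print(tally_comfort_vote({"Pat": 2, "Lee": 4, "Sam": 1, "Alex": 5}))
```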

APPENDIX D - SURVEYS

Why survey?

The best way to find out if people are satisfied with your services is to ask them. Understanding customer satisfaction is very important for all public agencies. Just how satisfied are your customers? Surveys, if designed well, can answer this question and can be repeated as often as needed; this isn't a one-shot deal.

Surveying is a means of gathering information about a particular population in one of two ways: questioning every member of the population, or sampling some of its members, using a set of standardized questions administered by telephone, mail, or personal interview. The primary purpose of a survey is to elicit information which, after evaluation, results in a profile or characterization of a population sample.

If the population is very large (e.g., all residents of a city or county), questioning every member of the population can be prohibitively expensive. This dictates the need for a "statistically valid" survey of a portion of the larger population. A statistically valid survey assures that the results can be applied to the population in general; it fairly represents the results as if the survey had been administered to the whole population. This type of survey may or may not be required depending upon the situation.

For a survey to be statistically valid, the sampling methodology must be probability sampling. A probability sample is the only type of survey where the results can be generalized from the sample to the population. Non-probability sampling, while less complicated and less time-consuming to administer, does not allow the generalization of the results beyond the sample.

Non-probability sampling includes convenience sampling (e.g., "person on the street" interviews), quota sampling (e.g., so many respondents of a certain age or income level), and judgmental sampling (e.g., you select those to be surveyed based on your own knowledge and skills).

Probability sampling includes simple random sampling (each person has an equal chance of being selected from a comprehensive list of all members of a population), stratified random sampling (members of the population are categorized into groups and then a random sample is taken from each group), cluster sampling (groups that are representative of the whole population are formed and then a random sample is taken from each group), and systematic sampling (every nth member of the whole population is selected after randomly selecting the starting point, e.g., every 20th member of the group).
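The probability sampling methods above can be implemented with very little code. The following is a minimal sketch (not from the guide; the population list and sample sizes are illustrative) of drawing a simple random sample and a systematic sample using Python's standard library.

```python
# Minimal sketch: simple random sampling and systematic sampling
# (every nth member after a randomly chosen starting point).

import random

def simple_random_sample(members, sample_size):
    """Each member has an equal chance of being selected."""
    return random.sample(members, sample_size)

def systematic_sample(members, interval):
    """Select every nth member after randomly choosing the starting point."""
    start = random.randrange(interval)
    return members[start::interval]

if __name__ == "__main__":
    population = [f"resident_{i}" for i in range(1, 1001)]  # illustrative list of 1,000 residents
    print(simple_random_sample(population, 10))
    print(systematic_sample(population, 100))  # roughly a 1-in-100 sample
```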

The design of the survey is critical and may require external assistance (consultant) to complete – unless you have this expertise on staff .

All surveys have costs, whether internal or external resources are used – design, production (graphics, forms, layout), implementation (printing, postage, distribution), compilation (documenting the results), analysis (determining what the results mean), and reporting (sharing the results).

The American Customer Satisfaction Index

One tool being used to measure government customer satisfaction is the ACSI (American Customer Satisfaction Index). The ACSI is a national indicator of the quality of goods and services available to the American public. Since October 1994, the ACSI has measured customer satisfaction for over 170 companies in the private sector. It was expanded in 1999 to measure satisfaction for 30 customer segments of 29 federal agencies. The agencies include most of the high-impact agencies that deal with 90% of the government's customers, e.g., Internal Revenue Service, Social Security Administration, Veterans Administration, etc.

Executive Order 12862, signed on September 11, 1993, created customer service standards for the Federal Government . It calls for “putting customers first” and striving for a “customer-driven government” that matches or exceeds the best service available in the private sector . All Executive departments and agencies were required to take the following actions:

• Identify the customers who are, or should be, served by the agency.
• Survey customers to determine the kind and quality of services they want and their level of satisfaction with existing services.
• Benchmark customer service performance against the best in business.
• Survey front-line employees on barriers to, and ideas for, matching the best in business.
• Provide customers with choices in both the sources of service and the means of delivery.
• Make information, services, and complaint systems easily accessible.
• Provide means to address customer complaints.

In response to this Executive Order, the ACSI was established by a partnership of the University of Michigan Business School, the American Society for Quality, and Arthur Andersen .

The ACSI scores for key customer segments of federal agencies range from 51 to 87 on a 0-100 scale. For the 170 private sector companies measured, the range is 53 to 86. The aggregated ACSI for the private sector is 71.9. The weighted satisfaction score for federal customer segments is 68.6. The difference in satisfaction between private and public sector services is significant, but not large.

What this ACSI benchmark study of federal agencies shows is that–in contrast to the widely held belief that trust in government is low–many specific agencies deliver services at performance levels comparable to the best in business .

Other key findings include:

• Government employees who have contact with the public receive high marks for courtesy and professionalism .

• Customers find information from information-providing federal agencies accessible, useful, and of high quality .

• Services delivered at the local level, such as Head Start and Women, Infants, and Children have high satisfaction scores .

• Aggregated for all customer segments, the quality of services received from federal agencies exceeds customers’ expectations for those services .

ACSI provides a means for benchmarking federal agencies against private sector industries and companies . It may also create an opportunity for local governments .

For details about the ACSI check out their website at: www.customersurvey.gov.

What do you want to achieve with your survey?

Before you initiate a survey, a clear, concise statement of the problem or issue to be studied and/or the information desired should be written down . It is helpful to list possible causes of the problem as well as possible solutions .


Now you are ready to write the survey objectives . The objectives should be clear before the survey questions are designed . Consider these issues in developing your objectives:

• What information is needed in order to understand the problem and its causes? Can the necessary information be obtained through means other than a survey?

• How will the information be used and by whom?
• What/who is the population to be studied?
• What kinds of analysis would be useful to understand the survey results? Will the resulting statistics be appropriate for the type of sampling methodology as well as the questions to be answered?

Types of surveys

There are many types of surveys . Each time you create a survey, the best survey tools for the situation need to be selected . The types of surveys include:

Response cards
These are postage-paid cards that contain fewer than ten questions. They can be distributed by mail, at counters, or at meetings. Respondents complete the questions and mail back the card at their leisure.

There are three limitations to this method: the amount of information that can be obtained is restricted to the size of the card, the responses are not random, and the results must be tabulated manually.

Website survey
A feedback button can be added to your website for customer comments, or customers can be directed to complete a brief survey. With some extra effort, survey responses can be consolidated and reported automatically so this doesn't need to be done manually.

The feedback button has several limitations: the responses are not random; the results need to be transcribed and manually tabulated; it is difficult to tabulate these responses because they can be about virtually anything; and someone must take the time to refer issues to specific work groups and follow up to ensure that responses are provided.

The web survey, if the response tabulation is automated, has the limitation of not being random and is, of course, limited to customers who have access to the Internet.

Interactive kiosks
An interactive kiosk allows customers to respond electronically to questions about services arranged in a menu format.

Availability and cost of a kiosk are issues to be considered, along with the concern that this method is not random. Tabulation of the results is typically built into the kiosk system.

Point-of-service survey
These surveys are given to customers at the time the service is delivered. Service counters can be used for suggestion boxes or specific surveys. This works well when inquiring about the service the customer is waiting to receive or has just received. The advantage of this type of survey is that it allows specific services to be evaluated and trends to be assessed.

Point-of-service surveys have some limitations: responses are not random, and results must be manually tabulated.


Follow-up surveys
After a period of time has passed since the service was provided (three months, six months), a mail or telephone survey of customers of a specific service can be used to determine satisfaction. The advantage of this type of survey is that it allows the customer to absorb the value of the service, or to reap its benefits, before responding. The time lag, however, also gives the customer an opportunity to forget what service was offered or how they felt about it.

Customer contact reports
Immediate customer feedback is given directly to the employee who served the customer. The employee or the customer may fill out the survey form to collect the data. This gives immediate feedback and is preferred when the service can be assessed quickly and the contact with the customer is brief. It also has the advantage of making the contact more personal and more appreciated by the customer. It can be an unpleasant experience for the employee if the customer was not happy with the service. On the other hand, that feedback may help the employee understand how the customer feels and make appropriate adjustments.

Telephone survey
Telephone surveys are relatively inexpensive and can allow for variations in questions based on screening information. They allow for more rapid collection of data than mail surveys since data collection is automated.

The best telephone survey process provides a toll-free number for individuals to call and respond to the questions with a touch-tone telephone keypad. As with all other surveys, the design of the survey is critical. A random sample of citizens can be selected to participate via a postcard invitation. Response is by calling the toll-free number. Results are automated and usually available within one to two days.

Downsides include the cost of mailing the invitation and the cost of using a telephone service bureau or purchasing your own equipment.

Other options include having staff actually call potential respondents, which requires careful training and monitoring of the interviews.

Mail survey
Mail surveys allow you to handle large sample sizes at a relatively low cost. More complex questions also can be asked in a mail survey.

The cost to plan, design, and administer a mail survey can be considerable. The tabulation of results is a significant task unless the responses are barcoded or automated in some way.

Meeting survey
Technology is available to automate the gathering and analysis of responses from meeting participants. Hundreds of people can be accommodated with handpads. Questions are asked by a facilitator and responses are given via the handpad. The results are instantly displayed in graph format. Impromptu questions can be added.

One advantage of this process is that all responses are anonymous. The immediate summary of the responses also assists in asking clarifying questions.

The cost of the system, question design, and administration need to be considered.


Which resources are required?

Since surveys can be costly, it is critical to determine whether or not a study needs to be done by asking:

Have studies of this subject been done previously? If studies have been done in the past, it may be more efficient to use the same format, including the survey form, so that the information is updated rather than re-created .

Is this the best measure of service quality, or would a timeliness or accuracy measure make more sense? Surveys should not be chosen just because the data is easier to manipulate .

Have other departments or agencies investigated this area, and will you be surveying the same population? Caution needs to be taken to not inundate citizens with surveys .

Is there a better way to get the information? Sometimes reliable data can be obtained by proxy measures when direct information is not readily available. Proxy measures represent reasonable substitutes and may be used when cost, complexity, or timeliness prevent a result from being directly measured.

If you have determined that a survey is needed, there are several tasks that need to be done including:

• Planning and project management – timeline, budget, meeting schedule, etc.
• Choosing a sampling methodology.
• Drawing the sample.
• Preparing the questionnaire.
• Pre-testing the questionnaire.
• Hiring and training resources if necessary – interviewers, telephone service bureau, etc.
• Collecting the data.
• Tabulating the data.
• Analyzing the data.
• Preparing the report.

The time needed to complete a survey varies based on the type of survey and the particular situation . However, it generally takes longer than anticipated . This sometimes leads to shortcuts that can invalidate the results and be misleading . The types of shortcuts or mistakes that most often occur are:

• Little thought is given to what information is really being sought and what will be done once the data is gathered . Planning is overlooked in the rush to get the job done .

• There is not a good correlation between the procedures used and the objectives of the survey . This may result in failure to obtain good data or the inability to correlate the data that is obtained .

• No pre-testing is done. Pre-testing is critical to work out the "bugs."
• The survey is a fishing expedition. Questions are asked for no good reason.
• Surveys are used when other data gathering methods would be better.
• There is insufficient attention to developing the questions and designing the form.
• Too many questions are asked.


Resources that may be required to complete the survey include:

• Staff time for performing the survey and guiding it through the various steps.
• Staff time for developing the survey questions (and consultant costs if required).
• Contract management time (if required).
• Labor and material costs for pre-testing the questionnaire and administering it.
• Labor and material costs for editing, coding, and tabulating the data from the questionnaire.
• Labor and material costs for the analysis of the data and report preparation.
• Telephone charges, postage, printing, consulting services.

Costs also increase with the complexity of the questionnaire and the amount of analysis required .

What needs to be considered when writing survey questions?

The type of question needs to be considered before the actual questions are written . The type of question also determines the amount of effort required to tabulate and analyze the responses .

Types of questions include:

Open-ended
These questions allow the respondent to answer a query in his/her own words. Since the data are difficult to categorize, open-ended questions are more suitable for small surveys.

Yes or No
Three responses are acceptable: "yes," "no," or "no opinion." This type of question stimulates a response and does not call for a more precise rating. It's simple for the respondent, but any slight misunderstanding of the question may result in a complete reversal of the true opinion.

Ranking questions
This type of question offers options and asks the respondent to rank them from most important to least important. The standard is to rank from 1 for the most important to 5 for the least important.

Demographic questions
These questions are simply descriptions to establish the category of the individual responding and the organization represented. Examples include age, gender, or address. Don't ask these questions unless they are germane to the data you are trying to collect.

Checklist questions
This type of question lists several options and asks the respondent to check those that apply. For instance, "What services would you like to see us offer? Check those that apply." A "no opinion" option helps deal with respondents who are not familiar with a particular topic. The disadvantage of this type of question is that people with little or no information may still express an opinion to conceal their lack of knowledge on the subject.

Multiple choice
The philosophy of this question design is that opinions are held along a graduated scale. These scales, for the purpose of the questionnaire, usually have 3, 4, or 5 ranks. This form of question is particularly useful if the issue is not clear-cut and the question cannot be answered with a simple yes or no. The range of possible answers must be complete enough to cover the entire range of opinions, and as far as possible, the answers should be mutually exclusive. There is a common tendency to choose the middle rather than the extremes. One way to deal with this issue is to use a six-point scale (Likert scale) that forces a positive or negative response; there is no single midpoint.
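A forced-choice scale is also straightforward to tabulate. The following is a minimal sketch (not from the guide; the scale labels and sample responses are illustrative) showing how responses on a six-point scale with no midpoint might be counted and summarized.

```python
# Minimal sketch: tabulating responses on a six-point forced-choice scale.

from collections import Counter

SCALE = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Slightly disagree",
    4: "Slightly agree",
    5: "Agree",
    6: "Strongly agree",
}

def tabulate(responses):
    """Count responses per scale point and report the share that leans positive."""
    counts = Counter(responses)
    positive = sum(counts[level] for level in (4, 5, 6))
    return counts, (positive / len(responses)) if responses else 0.0

if __name__ == "__main__":
    counts, positive_share = tabulate([6, 5, 5, 4, 2, 3, 5, 6, 1, 4])
    for level, label in SCALE.items():
        print(f"{label:18s}: {counts.get(level, 0)}")
    print(f"Leaning positive: {positive_share:.0%}")
```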

Rules for constructing a survey

There are many things to consider when constructing a survey. A number of rules follow:

Clarity is essential
The words used must mean the same thing to everyone. For example, words like "several," "most," and "usually" mean different things to different people. Depending upon the context, they may mean different things to the same person.

Short questions are best
The longer the question, the more difficult it is to understand.

Avoid negative questions
Negative questions tend to be misread by respondents who miss the negative word. The response is often the opposite of the true perception.

Avoid double-barreled questions
In this type of question, the respondent is asked to respond to two questions with a single answer. They may agree with one part and disagree with the other. For example, "Although recycling is a valuable program, pick-ups should only occur once a week." This requires judgment on two separate concepts. The respondent must make a value judgment on the recycling program before considering how often pick-ups should occur.

Use commonly used language
Technical terms, jargon, acronyms, and big words have no place in most surveys. Some respondents may not understand the terminology.

Ask general questions first
If you are asking both general and specific questions, ask the general questions first. If the specific questions are asked first, they will narrow the focus prematurely.

Avoid biased/misleading questions
Some respondents are offended by biased or misleading questions, while others are anxious to please whoever developed the survey. Either way, you don't get the honest answers you hoped for.

Offer "other" or "no opinion" options
When using multiple choice or yes/no questions, "other" or "no opinion" options should be included. In some cases, the respondents may be asked to explain their answer.

Avoid threatening questions
Questions that might be threatening or put the respondent in a bad light should be avoided. For example, "Are you still avoiding paying taxes?" is not an appropriate question. It implies that the respondent has previously shirked this responsibility. In many instances, the respondent will ignore the whole survey rather than look bad.


Be specific
Questions should be designed to yield exactly the information being sought. For example, "How long ago?" is not as good as "How many months ago?"

Avoid ambiguous questions
Avoid phrases or words that can mean different things to different people. Ambiguous questions also include questions that are too general, words with double meanings, and conditional or limiting phrases.

Avoid tantalizing words
Danger words like "abortion" or "gun control" should be avoided. Emotional words like "freedom," "equality," "bureaucracy," and "welfare" also should be avoided. Likewise, suggestive words like "reasonable" and "moderate" should be avoided.

Be reasonable with choices
Multiple choice answers should be reasonable. When seeking opinions across a range, have the same number of responses on either side of the mean. The intensity of the responses should be symmetrical.

Arrange carefully
The arrangement of alternative responses is important. Research has shown a tendency to pick the first of two alternatives.

Keep writing to a minimum
Writing by the respondent should be kept to a minimum. This applies to the question length as well as the response. The survey should be as short as possible while obtaining the required information.

Explain tough questions
Questions that appear to be unreasonable should include an explanation of why they are being asked.

Survey design

The layout of the survey is very important . Many surveys appear to have been constructed with no thought as to the impression they will make upon the respondent . Tabulation of the data is also an important consideration when laying out the survey . Questions are sometimes asked that have no obvious connection to the stated purpose of the survey . For example, the question “Are you a male or female?” is often added just because it is commonly used on surveys, but is not really going to be used to analyze the data .

A poorly designed survey form that appears to be asking for unrelated or disorganized data can cause respondents to have a negative attitude toward the survey and may have an adverse impact on cooperation and/or seriousness of the response . This survey may be the only direct contact the respondent has with your agency, so a favorable impression is important .

The following guidelines will assist you in designing your survey:

• Make the survey look good and pleasing to the eye . Don’t crowd the text .

• Organize the questions and the form so it is easy to complete .


• Number the questions and the pages .

• Place the name and address of the person to whom it should be returned on the survey (if applicable) .

• Give brief, clear instructions in bold type on the front page .

• Say what the objectives of the survey are on the front page .

• Use examples where the form is not clear or the question is hard to understand .

• Organize the survey in a logical sequence by grouping questions with the same response options .

• Avoid placing important items at the end of the survey .

• Make questions as short as possible . Make the survey as short as possible . Each additional page will lower the percentage that will respond .

• Consider whether facts or beliefs are sought . Two people may be able to agree on the facts, but may differ on beliefs concerning these facts .

• Only include questions that have a direct bearing on the survey .

• Do not include questions where the answer can be obtained elsewhere more accurately or more easily .

• Use caution when asking potentially threatening questions .

• Use clearly worded specific questions that can be answered briefly .

• Always keep in mind what depth of analysis will need to be done .

• Separate answer keys should not be used on the form, since they can lead to unintended responses . They also may have an unpleasant association with testing in an academic environment .

Pretesting

All surveys should be pretested, a process that can detect ambiguities, negative wording, and threatening questions . What may be obvious to the survey designer may be vague to the respondent .

The population surveyed in the pretest should be similar to the target population if possible . During the pretest, space should be provided for respondents to comment on the questions . Respondents should be able to comment on ambiguity, whether other questions should be asked and anything else that might improve the survey . The number of participants in the pretest is dependent on the total population of the survey . It is recommended that at least five participants pretest a small survey and up to 30 pretest a large survey .

With the results of the pretest in hand, it is time to analyze preliminary responses . You should assess whether the questions are well designed and if the survey works from all perspectives . If there are sharp differences in the responses to a specific question, it may be appropriate to construct additional or modified questions to help understand the reasons .

APPENDIX E - GLOSSARY

Accountability

• Accountability requires governments to answer to the citizenry – to justify the raising of public resources and the purposes for which they are used . Government accountability is based on the belief that citizens have a right to know, a right to receive openly declared facts that may lead to public debate by the citizens and their elected representatives .3

• The requirement to render an account or explain one’s actions to someone else who has the authority or power to assess performance and to make a judgment and take action .9

• Making and keeping commitments . Doing well what you were hired or elected to do .

Aggregation/disaggregation

• Showing many layers of data combined into one (aggregation); showing data broken out into meaningful parts (disaggregation) .

Baseline Data

• The initial collection of data to establish a basis for comparison or evaluation .

Benchmark

• The process of comparing performance to a goal, past performance, or another program’s measurement data .

• A standard or point of reference used in measuring and/or judging quality or value .

Benchmarking

• The process of identifying and importing best practices to improve performance .

• The process of continuously comparing and measuring an agency against recognized leaders to gain information that helps the agency take action to improve its performance .

Best-in-Class

• Outstanding performance within an industry or sector . “Best practice” is a synonym .

Bias

• Error, or distorted and unreliable survey results . All surveys contain some bias . Bias is increased when the respondents are not representative of the population being surveyed, when questions are poorly written or misunderstood, or when the researcher uses inappropriate techniques to analyze the data .

Continuous Improvement

• Ongoing, incremental steps taken to enhance service delivery by improving efficiency and/or effectiveness.


Customer

• The person or group that establishes the requirements of a process and receives or uses the outputs of that process, or the person or entity directly serviced by the agency.

Data

• The collection of observations and information resulting from the survey process .

Effectiveness/Outcome Measures

• Program effectiveness is the degree to which the program yields desired outcomes . Effectiveness and outcome are synonymous .

• Effectiveness determines the relationship of an agency's outputs to what the organization is intended to accomplish.

• These measures are designed to report the results (including quality) of the service . Effectiveness measurement is a method for examining how well a government is meeting the public purpose it is intended to fulfill . Effectiveness refers to the degree to which services are responsive to the needs and desires of a community . It encompasses both quantity and quality aspects of a service .

Efficiency Measures (cost effectiveness)

• Indicators that measure the amount of resources required to produce a single unit of output, or to achieve a certain outcome. These measures inform everyone how well resources were used to achieve intended aims by comparing input indicators with output and outcome indicators. For example, an input-output comparison would include cost per lane-mile of road repair, whereas an input-outcome measure would be cost per lane-mile of road maintained in good or excellent condition.

• Efficiency measures reflect the relationship between work performed and the resources required to perform it. They are typically presented as unit costs.

• The relationship between the amount of input and the amount of output or outcome of an activity or program. If the indicator uses outputs and not outcomes, an agency that lowers unit cost may achieve a measured increase in efficiency at the expense of the outcome of the service.

• Efficiency measurement is a method for examining how well a government is performing the things it is doing without regard to whether those are the right things for the government to do. Specifically, efficiency refers to the ratio of the quantity of the service provided (e.g., tons of refuse collected) to the cost, in dollars or labor, required to produce the service.

• Indicators that measure the cost (whether in dollars or employee hours) per unit of output or outcome. Examples are cost per million gallons of drinking water delivered to consumers, or cost per thousand gallons of effluent treated to a certain level of quality.
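The distinction between an input-output and an input-outcome efficiency measure is easiest to see with numbers. The following is a minimal sketch (not from the guide; all figures are illustrative) using the road-repair example above.

```python
# Minimal sketch: input-output vs. input-outcome efficiency measures.

total_cost = 1_250_000.00          # annual road repair spending (illustrative)
lane_miles_repaired = 250          # output: lane-miles repaired
lane_miles_good_or_better = 200    # outcome: lane-miles maintained in good or excellent condition

cost_per_lane_mile_repaired = total_cost / lane_miles_repaired    # input-output
cost_per_lane_mile_good = total_cost / lane_miles_good_or_better  # input-outcome

print(f"Cost per lane-mile repaired: ${cost_per_lane_mile_repaired:,.2f}")
print(f"Cost per lane-mile in good or excellent condition: ${cost_per_lane_mile_good:,.2f}")
```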

Explanatory Information3

• Two types of quantitative explanatory information are reported with SEA (service effort and accomplishment) measures – factors substantially outside the control of the agency, such as environmental and demographic characteristics (e.g., the density of population in the area where public transit is being provided, or the percentage of trucks in vehicle traffic), and factors over which the agency has significant control, such as staffing patterns (e.g., the number of buses in service per route-mile, or the type of construction used for highways).

• Explanatory data for performance measurement will vary from service to service, but most likely will include some physical and climatic characteristics (rain, snow, type of soil and water, terrain), as well as organizational/jurisdictional information (size of the municipality, number of people served, etc.). This information may be of great importance, especially for benchmarking. Other potential factors include:

• Conditions and problems in the community
• Interest of its elected officials and managers
• Abilities of the staff
• Resources available for improvements
• Strength of public employee unions
• Cultural and social factors

Indicators

• Indicators are the numbers that represent the measures. They offer a way to quantify changes – whether they are increasing or decreasing.

• A specific numerical measurement for each aspect of performance under consideration .

Input Measures

• These are measures of the resources an agency uses to provide a service, such as total dollars spent, or number of garbage trucks used .

• Input measures address the amount of resources (dollars, employee hours, etc .) used in providing a particular service .

• Resources used to produce outputs and outcomes .

• Indicators that are designed to report the amount of resources, either financial or other (especially personnel), that have been used for a specific service or program .

Measures of Accomplishments9

• Accomplishment measures report what was provided and achieved with the resources used . There are two types of measures of accomplishments – outputs and outcomes .

Measures of Efforts3

• Efforts are the amount of financial and non-financial resources (in terms of money, material, etc .) put into a program or process . Measures of service efforts also include ratios that compare financial and non-financial resources with other measures that may include potential demand for services, such as general population, service population, or lane-miles of road .


• Financial information: This information includes financial measures of expenditures / expenses . These measures include the cost of providing the service, including salaries, employee benefits, materials and supplies, contract services, equipment, etc . For example, measures of efforts may include the amount spent on public transit and the amount spent on public transit per commuter; the amount spent on road maintenance and the amount spent per lane-mile of road on road maintenance .

• Non-financial information: Number of personnel – because personnel are the major resource for most agencies, departments, programs, and services, indicators that measure the number of full-time-equivalent employees or employee-hours used in providing a service often are appropriate measures of resources used . These measures have the effect of removing wage, benefit, and cost-of-living differences from the resource inputs, and may facilitate comparisons over time and with other agencies . For example, one measure would be the number of road maintenance workers in total or per lane-mile .

• Other measures – these may include the amount of equipment (such as number of vehicles) or other capital assets (such as lane-miles of road) used in providing a service.

Measures that Relate Efforts to Accomplishments3

• Efficiency measures that relate efforts to outputs of services: These indicators measure the resources used or cost (for example, in dollars, employee-hours, or equipment used) per unit of output. They provide information about the production of an output at a given level of resource use and demonstrate an agency's relative efficiency when compared with previous results, internally established goals and objectives, generally accepted norms or standards, or results achieved by similar agencies. For example, measures may include the cost per transit passenger or per passenger-mile, or the cost per lane-mile of road repaired in total or repaired to good condition.

• Cost–outcome measures that relate efforts to the outcomes or results of services: These measures report the cost per unit of outcome or result . They relate costs and results so that management, elected officials, and the public can begin to assess the value of the services provided by the agency . For example, cost-outcome measures may include the cost per transit passenger arriving at his or her stop within a specific time schedule, or the cost per lane-mile of road improved or maintained in excellent or good condition .

Metrics

• Categories of information that define the overall performance of an agency, e.g., productivity, satisfaction.

• The elements of a measurement system consisting of key performance indicators, measures, and measurement methodologies .

Outcomes

• Measures that assess how well a service’s goals and objectives are accomplished . Outcome measures indicate the quality or effectiveness of a service . For example, cleanliness ratings based on routine inspections describe a city’s success (or lack thereof) at cleaning its streets .


• An assessment of the results of a program activity as compared to its intended purpose .

• An event, occurrence, or condition that is outside the activity or program itself and of direct importance to customers and the public generally . An outcome indicator is a measure of the amount and/or frequency of such occurrences . Service quality also is included under this category .

• End Outcomes: The end result that is sought, e .g . the community having clean streets .

• Intermediate Outcomes: An outcome that is expected to lead to a desired end, but is not an end itself . A service may have multiple intermediate outcomes .

• These indicators measure accomplishments or results that occur (at least partially) because of services provided . Results also include measures of public perceptions or outcomes . For example, measures may include the percentage of population being served by public transportation or the percentage of lane-miles of road in excellent, good, or fair condition . Outcome measures are particularly useful when presented as comparisons with results from previous years, agency established targets or goals and objectives, generally accepted standards, or other comparable agencies . For example, measures may include 25% of the population being served by public transportation when the transit system’s objective is to serve at least 35% of the population or where the norm for similar transit systems is that 30% of the public is being served, or 88% of the lane-miles of road in excellent or good condition when the agency’s objective is for at least 85% of the lane-miles of road to be in excellent or good condition or where an average of 80% of the lane-miles of road were in excellent or good condition for the previous five years .

Output Measures (workload)

• Indicators of the amount of service provided . For example, number of miles of road overlaid, or tons of garbage collected .

• Output indicators report the quantity or volume of products and services provided by a program. Another commonly used term is workload indicators, e.g., volume statistics.

• Completed activity: Outputs refer to internal activity and the amount of work done within the agency .

• Tabulation, calculation, or recording of activity of effort .

• Products and services delivered: Output refers to the completed products of internal activity and the amount of work done within the agency or by its contractors (such as number of miles of road repaired) .

• Indicators that report units produced or services provided by a program . Workload measures indicate the amount of work performed or the amount of services received .

• Quantity of a service provided – these indicators measure the physical quantity of a service provided. For example, measures may include the number of passenger-miles provided by public transit or the number of lane-miles of road repaired.

• Quantity of a service provided that meets a certain quality requirement – these indicators measure the physical quantity of a service provided that meets a test of quality. For example, measures may include the percentage of buses meeting a prescribed on-time standard of achievement or the percentage of lane-miles of road repaired to a certain minimum satisfactory condition. In some cases, meeting a quality requirement may turn an "output" indicator into an "outcome" indicator.

Performance Auditing

• Once the performance management system is in place, the performance measures make sense, and there is agreement on them, performance auditing can be performed. The auditing includes verification of actual performance against the performance measures.

Performance Communication

• The frequent, ongoing, often informal sharing, discussion and feedback on performance .

Performance Goal

• A target level of an activity expressed as a tangible measurable objective against which actual achievement can be compared .

Performance Management

• The use of performance measurement information to set performance goals, allocate and prioritize resources, review results, communicate with all stakeholders, and reward performance.

Performance Measurement

• An assessment of how an agency performs at providing services .

• Measuring the “so what” of our programs and services .

• Government’s way of determining whether it is providing a quality product at a reasonable cost .

• A process of assessing progress toward achieving predetermined goals, including the efficiency with which resources are transformed into goods and services (outputs), the quality of those outputs and outcomes, and the effectiveness in terms of their specific contributions to program goals .

Performance Measures

• Indicators used to show, for example, (1) the amount of work accomplished, (2) program outcomes, (3) the efficiency with which tasks were completed, and (4) the effectiveness of a program .

• Several measurable values that contribute to the understanding and quantification of indicators .

Population

• The universe or collection of all elements (persons, businesses, etc .) being described or measured by a sample .


Pretest

• An initial evaluation of the survey design by using a small sub-sample of the intended population for preliminary information .

Process Owner

• The individual who possesses managerial control over a particular process .

Productivity

• The cost per unit of goods or services, holding quality constant . Productivity increases when the cost per unit goes down but quality remains constant or increases .

• Productivity quantifies the outputs and inputs of an organization and expresses the two as a ratio . Generally, the ratio is expressed as output to input (for example, inspections per staff-day) .

• Indicators that combine the dimensions of efficiency and effectiveness in a single indicator. For instance, whereas "meters repaired per labor-hour" reflects efficiency, and "percentage of meters repaired properly" (e.g., not returned for further repair within 6 months) reflects effectiveness, "unit costs (or labor-hours) per effective meter repair" reflects productivity. The costs (or labor-hours) of faulty meter repairs as well as the costs of effective repairs are included in the numerator of such a calculation, but only good repairs are counted in the denominator, thereby encouraging efficiency and effectiveness on the part of meter repair personnel.
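The meter-repair example can be made concrete with a few numbers. The following minimal sketch (not from the guide; all figures are assumed) shows how the efficiency, effectiveness, and productivity indicators relate.

```python
# Minimal sketch: efficiency vs. effectiveness vs. productivity for meter repairs.
# All repair labor-hours go in the productivity numerator, but only effective
# (good) repairs are counted in the denominator.

labor_hours_total = 400         # labor-hours spent on all meter repairs (illustrative)
meters_repaired = 200           # output: repairs completed
meters_repaired_properly = 180  # repairs not returned for further repair within 6 months

efficiency = meters_repaired / labor_hours_total            # meters repaired per labor-hour
effectiveness = meters_repaired_properly / meters_repaired  # share of repairs done properly
productivity = labor_hours_total / meters_repaired_properly # labor-hours per effective repair

print(f"Efficiency:    {efficiency:.2f} meters repaired per labor-hour")
print(f"Effectiveness: {effectiveness:.0%} of meters repaired properly")
print(f"Productivity:  {productivity:.2f} labor-hours per effective meter repair")
```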

Program

• Groupings of routine activities aimed at providing support for a certain service .

Proxy or Surrogate Measures

• Occasionally direct measurement of a specific outcome is not possible, either initially or at all . In these cases, a proxy or surrogate measure is needed to track progress on those outcomes . These measures are substitutes that are used when cost, complexity or timeliness prevent a result from being measured directly . Proxy or surrogate measures also may be used if the agency wants to measure service quality without surveying for customer satisfaction or if the beneficiary is an entity like the environment . In these cases, a proxy measure may be the best or only way to capture performance measurement data .

Quality

• Quality examines an output or the process by which an output is produced . Quality is indicated by attributes such as accuracy (or error rate), thoroughness, and complexity .

Respondent

• An element or member of the population selected to be sampled .


Sample

• Any portion of the population, less than the total .

Survey

• A sampling or partial collection of facts, figures or opinions taken and used to approximate or indicate what a complete collection and analysis might reveal .

Service Quality

• The degree to which customers are satisfied with a program, or how accurately or timely a service is provided; e .g ., percent of respondents satisfied or average days to address a facility work order .

Strategic Direction

• The agency’s goals, objectives, and strategies by which it plans to achieve its mission, vision and values .

Strategic Goal

• A long-range target that guides an agency’s efforts in moving toward the desired vision .

Strategic Objective

• A time-based measurable accomplishment required to realize the successful completion of a strategic goal .

Strategic Planning

• A continuous and systematic process whereby an agency makes decisions about its future, develops the necessary procedures and operations to achieve that future, and determines how success is to be measured .

Target

• A mark to shoot at; a short-term goal to be achieved .

Timeliness

• Timeliness evaluates the time involved to produce an appropriate output .

Unit Cost

• The cost to provide one unit of service / product . Unit costs are calculated by dividing the total costs of a service or function by the number of units provided .

World Class

• Leading performance in a process, independent of industry or geographic location .
