
LISTEN AND LEARN

Using customer surveys to report performance

in the Western Australian public sector

PERFORMANCE EXAMINATION

Report No 5 – June 1998

Western Australia

AUDITOR GENERAL


© 1998 Office of the Auditor General Western Australia. All rights reserved. This material may be reproduced in whole or in part provided the source is acknowledged.

Clipart from Microsoft Clipart.

4th Floor Dumas House

2 Havelock Street

West Perth WA 6005

Telephone: (08) 9222 7500

Facsimile: (08) 9322 5664

E-mail: [email protected]

http://www.audit.wa.gov.au/

AUDITOR GENERAL

Western Australia

The Office of the Auditor General is a customer focused organisation and is keen to receive feedback on the quality of the reports it issues.

MISSION of the Office of the Auditor General

Through Performance Auditing enable the Auditor General to meet Parliament's need for independent and impartial strategic information regarding public sector accountability and performance.

VISION of the Office of the Auditor General

Leading in Performance Auditing


Report No 5 – June 1998

LISTEN AND LEARN

Using customer surveys to report performance in the Western Australian public sector

Western Australia

AUDITOR GENERAL


AUDITOR GENERAL

Western Australia

THE SPEAKER                                   THE PRESIDENT
LEGISLATIVE ASSEMBLY                          LEGISLATIVE COUNCIL

PERFORMANCE EXAMINATION — LISTEN AND LEARN – Using customer surveys to

report performance in the Western Australian public sector

This Report has been prepared consequent to examinations conducted under section 80 of the

Financial Administration and Audit Act 1985 for submission to Parliament under the provisions

of section 95 of the Act.

Performance examinations are an integral part of my overall Performance Auditing Program and

seek to provide Parliament with assessments of the effectiveness and efficiency of public sector

programs and activities, thereby identifying opportunities for improved performance.

The information provided through this approach will, I am sure, assist Parliament in better

evaluating agency performance and enhance Parliamentary decision making to the benefit of all

Western Australians.

D D R PEARSON

AUDITOR GENERAL

June 24, 1998


Contents

Executive Summary 1

Overall Findings and Conclusions 2

Summary of Recommendations 4

Introduction 6

Performance Indicator Trends 6

Types of Customer Feedback 7

Examination Focus and Approach 8

Survey Costs and Utilisation 10

Conclusions 10

The Cost of Customer Surveys 10

Contracting Out Customer Surveys 12

Utilising Survey Findings 13

Recommendations 14

Survey Accuracy and Reporting 15

Conclusions 15

Technical Accuracy of Customer Surveys 15

Survey Reporting and Record Keeping 24

Recommendations 26

Appendix 1: Key Steps in Conducting Customer Surveys 27

Appendix 2: Judging the Quality of Survey Research: A Checklist 29

Appendix 3: A Guide to Sample Size Requirements 31

Performance Examination Reports 32


Executive Summary

The Western Australian public sector is increasingly using the results of

customer satisfaction surveys as one of the indicators used to report on its

performance.

Excluding health service agencies, the number of agencies reporting customer

satisfaction as an effectiveness indicator has doubled in the past three years.

Customer satisfaction is now the most frequently reported indicator of

effectiveness – used by 66 per cent of all agencies in 1996–97 annual reports.

Despite the most careful procedures, all surveys involve potential errors

that can introduce uncertainty or bias. For the results to be credible, error

must be reduced whenever possible and reported results should disclose any

significant survey limitations (see Figure 1). The goal is not technical

perfection but credibility and appropriateness in relation to intended use.

Intended use within the Western Australian public sector can entail:

•  An ongoing and developing accountability relationship requiring agencies to report on their performance to Parliament – without high standards of rigour it is easy to inadvertently mislead;

•  An increasing number of policy and resource allocation decisions within government being based on customer feedback; and

•  The linking of customer satisfaction data to public sector performance pay and Chief Executive performance agreements.

Figure 1: Achieving Survey Rigour and Credibility

If publicly reported customer satisfaction surveys are to have credibility, certain technical standards need to be met. The elements of rigour illustrated are: unbiased sampling, sampling precision, response rates, data collection, validity, reliability, record keeping, and analysis and reporting.
Source: OAG


This performance examination appraised the rigour with which public sector

agencies had conducted selected customer satisfaction surveys. Seven surveys,

each from a different agency, were examined as in-depth case studies. The

examination reveals issues that are likely to be relevant to all public sector

agencies.

Overall Findings and Conclusions

Survey Costs

Survey costs ranged from an estimated $3 000 to $74 000, the more costly

being larger and of better quality. The average survey cost per respondent

varied from just over $30 to about $160. Cost per respondent was not a

good indicator of survey rigour.

Small budget agencies with large numbers of customers may find it difficult

to meet the cost of undertaking reliable annual customer surveys. Alternatives

include sharing survey costs with other agencies that have similar customer

groups. This would also reduce the risk of ‘respondent overload’ when the

same group of customers is repeatedly surveyed by different government

agencies.

Five of the agencies used consultants to assist in their surveys. All agencies

received value for what they paid. However agencies still need to possess,

or have access to, survey knowledge and skills in order to manage consultants.

Utilising Survey Findings

The seven agencies examined were able to demonstrate, to varying degrees,

actions that they had taken as a result of feedback from customer surveys.

The degree of Chief Executive support, and the linking of surveys to other

service improvement strategies, were important factors that assisted agency

utilisation of customer satisfaction surveys.


Survey Accuracy

Many agencies have made a considerable effort to tackle this new and difficult

area of measuring the satisfaction of public sector customers. However, all

of the surveys examined had some weaknesses and several had many.

Sampling Methods

Three of the seven agencies were able to demonstrate that they had taken a

rigorous approach to developing a list of their customers, and then taking a

sample of people from this listing. The other four agencies had difficulties

such as defining their customer groups and constructing a customer database.

Response Rates

Low response rates (under 50 per cent) were observed in five of the seven

surveys examined. This increases the risk of survey bias, which can give

misleading results. Only one agency undertook some further analysis to try to determine the direction and extent of this potential bias.

Sampling Precision

Sampling precision ranged from an excellent low of ± 3 per cent to an

unacceptable high of ± 17 per cent. In the latter case, a reported measure of

57 per cent customers satisfied with the program’s performance could mean

that anywhere between 40 per cent and 74 per cent were satisfied. Agencies

with a high level of sampling error in their surveys were generally unaware

of its significance. Improving the precision of agency surveys can be achieved

by taking larger samples of agency clients, and by reducing levels of non-

response.

Validity and Reliability of Measurement

Validity addresses the issue ‘Does the survey really measure what it is supposed

to measure?’. Developing a valid survey for performance reporting purposes

requires agency objectives that are clear, specific and measurable. Survey

validity has to be planned for and tested at the survey design phase. None of

the agencies had done this. Using the most basic of tests, one of the surveys

appeared to have problems with survey validity.


Reliability is about consistency of measurement and needs to be considered

as part of a survey’s development. None of the agencies had done this, which

is of particular concern for the three surveys that had a limited number of

respondents. These three surveys are at more risk of being unreliable and

hence their reported performance indicators are more likely to be inaccurate.

Survey Reporting and Record Keeping

Several of the agencies have made errors in the analysis and/or interpretation

of their survey findings. In particular, they had problems in interpreting

changes in customer satisfaction ratings over time. This appeared to be a

problem of agency expertise, rather than a matter of deliberate

misrepresentation.

The reported performance indicators often contained little additional

information to assist readers in assessing agency performance. For example they

did not include a standard or a benchmark against which to compare agency

performance. They also did not generally include explanatory notes to inform

the reader about the survey’s technical limitations.

Agencies and/or their consultant were able to provide the examination team

with copies of all requested documents, generally with minimal delay. However

managers were unsure of the retention periods that applied to these records.

Summary of Recommendations

Agencies should:

•  ensure that their surveys are conducted in a scientifically rigorous manner so as to minimise all types of survey error;

•  present their performance indicators in conjunction with relevant supporting information such as comparative benchmarks and the survey's technical limitations;

•  assess the cost-effectiveness of undertaking customer surveys, particularly those agencies with small budgets and a large number of customers;


•  ensure that they have access to sufficient survey knowledge to effectively manage consultants when contracting out their surveys;

•  be aware of the possibility of over burdening their clients with requests to participate in customer surveys. Agencies with common clients should liaise with each other to avoid this situation; and

•  use their survey findings as a tool to assist in service improvement and as a means of demonstrating accountability.


Introduction

Public sector agencies conduct customer satisfaction surveys for two main

reasons:

•  for feedback to monitor and improve service; and

•  as a means of demonstrating public accountability for performance.

In addition, key customer service objectives are reflected in Chief Executive

Officer performance agreements. Similarly, customer satisfaction surveys

can also be used as staff productivity measures within agency enterprise

bargaining agreements and workplace agreements.

Performance Indicator Trends

Within the Western Australian public sector, all departments and most

statutory authorities are required to report key performance indicators in

their annual report to Parliament. These indicators are an essential component

of public sector accountability, enabling Parliamentarians and citizens to

assess the efficiency and effectiveness of public sector operations. They are

audited by the Auditor General.

Excluding hospitals and other health services, the number of agencies

reporting customer satisfaction¹ as an indicator of their effectiveness has

doubled from around 40 to over 80 in the past three years. For all public

sector agencies, it was the most frequently used measure of effectiveness in

1997 Annual Reports (Figure 2).

¹ The terms customer satisfaction and customer feedback are used interchangeably in this report.


Figure 2: Types of Effectiveness Performance Indicators

Customer satisfaction is the most frequent measure of effectiveness in Annual Reports – reported by 66 per cent of all agencies in 1997.
[Bar chart: percentage of agencies using each type of effectiveness indicator – customer satisfaction, timeliness measures, defined outputs, independent accreditation, client awareness and behaviour, industry benchmarks, output indexes, and economy activity – on a scale of 0 to 70 per cent.]
Source: OAG

Being responsive to customer feedback is part of the Government’s public

sector customer focus strategy, launched by the Premier in 1994. The primary

aim of the strategy is to ensure that the Western Australian public sector

continuously improves service delivery, and provides value for money service

to the community of Western Australia. Agencies are required to develop

customer service charters as part of the strategy.

Types of Customer Feedback

The conclusions one can draw from customer feedback depend upon how the

information has been obtained. For example, casual comments from customers

can offer insights that help to improve service delivery, but a rigorous scientific

survey is needed to yield results that can be generalised with reasonable

certainty to all of the agency’s customers.



Some common approaches to obtaining customer feedback, and their

appropriate usage, include:²

Suggestion boxes

Provide a relatively non-threatening way for customers to express their

preferences and make suggestions.

Complaint handling

Establish formal systems to record customer complaints. Seek to immediately

address complaints as they arise in addition to identifying any common or

recurring patterns over time.

Focus groups

Focus groups have been used in marketing for a long time. They are used to

get direct reactions from customers on goods and services offered and to

provide an opportunity for consumers to speak. They are especially useful in

identifying needs and assessing issues relating to the introduction of a service.

Customer surveys

Scientifically rigorous surveys are particularly appropriate when precise and

unbiased information is required to support major management decisions,

and for the purposes of demonstrating accountability. For a description of

the key steps in undertaking customer surveys, see Appendix 1.

Examination Focus and Approach

The aims of the performance examination were to:

•  assess the technical quality of selected agency customer satisfaction surveys;

•  identify common issues and problems and examples of good practice; and

•  overview the differing survey costs and approaches.

² For a discussion of these and other approaches see: OECD, 1996, Responsive Government – Service Quality Initiatives, author, Paris.


Seven agencies were purposefully selected to provide a reasonable cross-

section: large versus small budgets; service delivery agencies versus policy

advising bodies. Given the aims and timeframes for the examination, it was

considered appropriate to undertake a limited number of in-depth case

studies, as opposed to surveying a representative sample of agencies.

From each of the seven selected agencies one customer survey, included as a

performance indicator in the agency’s 1997 annual report, was selected for

examination. The seven surveys selected were:

•  Customer Perception Survey – Department for Family and Children's Services (FCS);

•  Land Operations Survey – Department of Land Administration (DOLA);

•  Survey of Local Governments – Department of Local Government (DOLG);

•  Customer Perception Survey – Department of Transport, Maritime Division (Transport);

•  National Tenant Satisfaction Survey – The State Housing Commission (Homeswest);

•  Survey of Community Organisations – Office of Multicultural Interests (OMI); and

•  Key Performance Indicator Survey – Rottnest Island Authority (RIA).

Any other customer satisfaction surveys conducted by the agencies selected

were not examined in detail. In addition, the examination did not attempt to

assess whether the agencies’ objectives were best measured by a customer

satisfaction survey or by an alternative methodology. It simply appraised

the rigour with which each survey was conducted. Finally, this report is not

intended as a general guide to the development and reporting of performance

indicators.³

³ See:
•  Preparing Performance Indicators – A Practical Guide (joint publication Treasury Department, Public Sector Management Office, Office of the Auditor General – April 1997);
•  Under Wraps! Performance Indicators in Western Australian Public Hospitals (special report No. 4, Office of the Auditor General – August 1996);
•  Output Based Management (OBM) – Guidelines to Assist Agencies (Treasury Department – July 1996); and
•  Public Sector Performance Indicators (special report No. 7, Office of the Auditor General – December 1994).


Survey Costs and Utilisation

Conclusions

•  Agencies have to make a trade-off between survey cost and obtaining quality data – for some agencies containing costs was the primary consideration.

•  The level of Chief Executive support and the linking of surveys to other service improvement strategies were important factors that assisted agencies with the utilisation of survey findings.

The Cost of Customer Surveys

Table 1: Survey Methods and Costs

The seven surveys examined differed in size, cost and methodology.

Survey and Agency Names                          Contracted  Data Collection Method                            Completed  Total Survey       Cost per Respondent
                                                 Out?                                                          Surveys    Cost (1) (estimated)  (estimated)
Customer Perception Survey – FCS                 Yes         Telephone interviews & mailed questionnaires      1 012      $39 704            $39
Land Operations Survey – DOLA                    No          Mailed questionnaires                             29         NA (2)             NA
Survey of Local Governments – DOLG               No          Faxed questionnaires                              34         $5 464             $161
Customer Perception Survey – Transport           Yes         Mailed questionnaires & telephone interviews      769        $25 250            $33
National Tenant Satisfaction Survey – Homeswest  Yes         Face to face interviews & mailed questionnaires   1 125      $74 036            $66
Survey of Community Organisations – OMI          Yes         Telephone interviews                              24         $2 800             $117
Key PI Survey – RIA                              Yes         Telephone interviews                              150        $5 500             $37

Notes:
(1) Cost estimates were based on known consultant costs plus estimated direct salary costs of in-house staff – excepting DOLG's estimate which was based on direct salary costs only.
(2) NA = Not available. DOLA was unable to separate its in-house staffing costs for this particular survey from its other organisational customer focus initiatives.

Source: Agencies and OAG


There was considerable variation across agencies in survey approaches and

costs.

OMI and RIA viewed their survey primarily as a means to obtain data for

performance reporting purposes. DOLA focused more upon the survey’s

potential for obtaining information to improve agency services. The remaining

agencies sought more of a balance between these two purposes.

The small budget agencies generally undertook a simple survey of a small

number of customers, while the larger budget agencies obtained more

comprehensive information from a large number of customers. As a result,

the total estimated cost of agency surveys varied from a low of just under

$3 000 by OMI to a high of about $74 000 by Homeswest.

Total survey costs are driven primarily by the number of respondents and

the data collection method utilised. This requires agencies to make an

informed trade-off between the cost and timeliness of their survey versus

obtaining high quality data. Small budget agencies with large numbers of

customers may find it difficult to meet the cost of undertaking reliable and

meaningful annual customer surveys. Such agencies may need to consider

other options such as sharing survey costs with other agencies that have

similar customer groups or undertaking customer satisfaction surveys less

frequently.

In addition to cost considerations, there may be other advantages in agencies

with similar customer groups coordinating their surveys. In particular it

reduces the risk of ‘respondent overload’ when the same group of customers

is repeatedly surveyed by different government agencies.

Survey cost per respondent also varied, from a low of $33 for Transport through to a high of $161 for DOLG (a simple calculation using the Table 1 figures is sketched after the list below). There were many factors influencing the survey cost per respondent, including:

•  the presence of economies of scale;

•  the methods of data collection used – face to face interviews are more expensive than telephone interviews and mailed questionnaires;

•  the survey response rate – lower response rates leading to higher unit costs; and

•  an agency focus upon cost minimisation rather than data quality.
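As a simple illustration of the arithmetic behind the final column of Table 1, the following minimal sketch divides each estimated total cost by the number of completed surveys; the figures are those reported in Table 1 (DOLA is omitted because its cost was not available) and rounding to the nearest dollar is an assumption.

```python
# Illustrative only: reproduces the approximate cost-per-respondent figures in Table 1.
surveys = {
    "FCS":       {"total_cost": 39704, "completed": 1012},
    "DOLG":      {"total_cost": 5464,  "completed": 34},
    "Transport": {"total_cost": 25250, "completed": 769},
    "Homeswest": {"total_cost": 74036, "completed": 1125},
    "OMI":       {"total_cost": 2800,  "completed": 24},
    "RIA":       {"total_cost": 5500,  "completed": 150},
}

for agency, s in surveys.items():
    cost_per_respondent = s["total_cost"] / s["completed"]
    print(f"{agency}: ${cost_per_respondent:.0f} per completed survey")
```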


Contracting Out Customer Surveys

Five of the seven agencies examined used a consultant to conduct their

survey. Consultants were generally used to initially develop the agency’s

survey, to collect data from the agency’s customers, and to then prepare a

report on the findings. The most common reasons agencies gave for using a

consultant were:

•  to obtain expertise not available in-house;

•  to encourage customers to respond to the survey in a full and frank manner which could be compromised if the agency undertook the data collection itself; and

•  to give additional assurance that data collection and analysis was undertaken in an impartial manner.

All agencies received value for what they paid. However agencies still need

to possess, or have access to, survey knowledge and skills even if they wish

to contract out their surveys. Without such skills, they risk not being able

to adequately direct, control and evaluate their consultant. For the benefit of

managers, whether using consultants or in-house staff, a checklist for judging

the quality of survey research is given at Appendix 2.

A brief review of the tendering process was undertaken within those five

agencies that had contracted out their customer satisfaction survey. In all

cases, the agencies had:

•  sourced their consultants in accordance with State Supply Commission purchasing policies; and

•  made payments in accordance with the agreed contract terms.

Most agencies chose to initially test a consultant on a short-term contract

before committing to a longer-term contract. However, a series of one-off

annual contracts may result in higher costs for the agency, and a lower quality

of service over time. Agencies should therefore consider forming a somewhat longer-term contract with their consultant to get the best value for money.

This approach, if applied appropriately, is consistent with State Supply

Commission purchasing policies.


Utilising Survey Findings

Public sector agencies are increasingly turning to customer surveys as a tool

to help them become more responsive and effective in serving their customers.

The seven agencies examined were able to demonstrate, to varying degrees,

actions that they had taken as a result of feedback from customer surveys.

The level of Chief Executive support and the linking of surveys to other

service improvement strategies were important factors that assisted agency

utilisation of customer surveys.

DOLA in particular could demonstrate an integrated holistic approach to the

collection and utilisation of survey data. This approach is illustrated in Figure 3.

Figure 3: Utilising Customer Feedback

DOLA has an integrated organisational approach to the collection and utilisation of customer feedback – an ongoing participative process, supported by its Customer Service Charter and Standards:

Market and Customer Research – customers and staff are asked to: rate the quality of current services; identify their own needs; and identify their priorities for service changes.

Assessment, Measurement and Feedback – results from the customer research are fed back to management, staff, DOLA's Customer Council, and individual customers through displays in DOLA's offices.

Management Action – strategic planning (set priorities, allocate resources), monitoring of progress, and accountability reporting.

Strategies Implemented – training and education (general and accredited training); process and system changes; redesigned Job Description Forms that now include customer services competencies; and incentives/rewards for staff (Division/Branch awards, annual DOLA-wide awards, staff pay linked to performance).

Changes and Impact – DOLA's culture is becoming more focused on customer service; improved customer service (e.g. quality and timeliness); improved efficiency and productivity; new products and services developed; and improved stakeholder consultation.

Source: OAG & DOLA


Other examples of service improvements noted during the examination

include:

•  FCS became aware that Aboriginal people were under-represented in their customer perception survey. As a result they commissioned a series of separate Aboriginal perception surveys that were conducted in a more culturally appropriate manner.

•  An area of concern identified in Transport's survey was delays experienced by customers when making boat registration payments. Transport introduced a payment facility through Australia Post to address this problem in June 1997. The new payment facility proved popular, attracting over 40 per cent of all boat registrations in its first six months of operation.

•  Homeswest's customer survey identified tenant dissatisfaction with time delays in the completion of house maintenance. As a result, Homeswest introduced new maintenance contracts in January 1998, which specified shorter maintenance times according to the degree of urgency. Homeswest tenants were advised of the new standards, and the contracts provide for the payment of an incentive bonus to contractors for meeting these new standards.

Recommendations

Agencies should:

•  ensure that they have access to sufficient survey knowledge to effectively manage consultants when contracting out their surveys;

•  be aware of the possibility of over burdening their clients with requests to participate in customer surveys. Agencies with common clients should liaise with each other to avoid this situation; and

•  use their survey findings as a tool to assist in service improvement and as a means of demonstrating accountability.


Survey Accuracy and Reporting

Conclusions

•  Many agencies have made considerable effort to tackle this new and difficult area of customer satisfaction surveying. However, all of the surveys examined had some technical weaknesses, and several had many.

•  Agencies that made greater use of customer feedback for performance management generally also undertook better quality surveys.

•  Agency surveys need not be technically perfect, but they do need to be credible for their intended use.

•  When reporting on performance, agencies should advise readers of any significant limitations in the data presented.

Technical Accuracy of Customer Surveys

Performance reports are designed to help improve public programs, provide

accountability to the public, and guide decision makers on the allocation of

resources. If customer satisfaction surveys are to provide accurate and useful

information, they need to be properly conducted. This is illustrated in Figure 4.

Figure 4: The Importance of Survey Rigour

The level of reporting accountability, and quality of management decision making, is dependent upon the underlying survey rigour. In the figure, unbiased sampling, response rates, sampling precision, data collection, validity, reliability, record keeping, and analysis and reporting feed into accurate and credible survey findings, which in turn support enhanced accountability and improved decision making and public services.
Source: OAG


Some of the more important technical issues that need to be addressed to

obtain accuracy when undertaking customer surveys include:

•  Sampling Methods – in particular making sure the sample is randomly selected, and the size is adequate, to increase the likelihood that the sample accurately represents the population;

•  Data Collection Methods – there is a trade-off between the quality of the data obtained and the cost of different data collection methods;

•  Response Rates – low response rates can produce biased results;

•  Sampling Precision – the difference between some true population value and the estimate of this value obtained from surveying a sample of people;

•  Validity – is the survey measuring what it is supposed to?; and

•  Reliability – does the survey measure in a consistent manner?

Key technical data from the seven agency customer surveys is summarised

in Table 2.


Table 2: Technical Quality of Customer Surveys

The technical quality of the surveys was variable. Common problems included: low response rates; poor sampling precision; and not examining survey validity and reliability.

Survey and Agency Names                          Population   Initial Sample   Response   Sampling        Validity (2)   Reliability (3)
                                                 Size         Size             Rate       Precision (1)
Customer Perception Survey – FCS                 31 638       4 090            25%        ± 3%            ✓              ✓
Land Operations Survey – DOLA                    253          253              11.5%      ± 17%           ✓              ✗
Survey of Local Governments – DOLG               144          49               69%        ± 15%           ✓              ✗
Customer Perception Survey – Transport           59 362       1 818            42%        ± 3.5%          ✓              ✓
National Tenant Satisfaction Survey – Homeswest  34 188       2 218            51%        ± 3%            ✓              ✓
Survey of Community Organisations – OMI          50           50               48%        ± 14%           ✗              ✗
Key PI Survey – RIA                              316 940      413              36%        ± 8%            ✓              ✓

Notes:
(1) Sampling precision as measured by the standard error of a proportion at the 95 per cent confidence level with an assumed proportion of incidence in the population of 0.5.
(2) None of the agencies had tested their surveys for validity. Ticks are given for meeting the most basic criterion of having face validity (appearing to be a sound measure of the concept in question).
(3) None of the agencies had tested their surveys for reliability. Ticks are given to those surveys with a larger number of survey respondents that are at less risk of producing unreliable performance indicators.
(4) Population size is an estimate.

Source: OAG and agencies
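The sampling precision figures in Table 2 can be approximately reproduced from the calculation described in note (1), applied with a finite population correction to the number of completed surveys. The following Python sketch is illustrative only; it assumes the completed-survey counts shown in Table 1 and the population sizes shown in Table 2.

```python
import math

# Illustrative reproduction of the Table 2 sampling precision figures:
# 95% margin of error for a proportion, assuming p = 0.5, with a finite
# population correction. Completed-survey counts are taken from Table 1.
Z_95 = 1.96  # z-score for a 95 per cent confidence level
P = 0.5      # assumed proportion of incidence in the population

surveys = {
    "FCS":       {"population": 31638,  "completed": 1012},
    "DOLA":      {"population": 253,    "completed": 29},
    "DOLG":      {"population": 144,    "completed": 34},
    "Transport": {"population": 59362,  "completed": 769},
    "Homeswest": {"population": 34188,  "completed": 1125},
    "OMI":       {"population": 50,     "completed": 24},
    "RIA":       {"population": 316940, "completed": 150},
}

for agency, s in surveys.items():
    n, N = s["completed"], s["population"]
    standard_error = math.sqrt(P * (1 - P) / n)
    fpc = math.sqrt((N - n) / (N - 1))        # finite population correction
    margin = Z_95 * standard_error * fpc      # e.g. DOLA comes out at about +/- 17 per cent
    print(f"{agency}: +/- {margin * 100:.1f} per cent")
```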


Sampling Methods

Customer surveys usually involve studying a sample of agency customers

with a view towards reaching conclusions about all of the agency’s customers.

Important sampling tasks to be considered include:

•  Defining the customer group;

•  Obtaining a complete and accurate customer list from which to select a sample;

•  Ensuring that each customer has a known non-zero probability of being randomly selected to participate in the survey;

•  Deciding whether to stratify the population in order to ensure that the views of the full range of customer groups are accurately captured; and

•  Selecting the right sample size (refer Appendix 3) – if the customer group is small, say less than 200 people, it may be more appropriate to undertake a census of the entire population. A minimal selection sketch is given after this list.
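To illustrate the selection tasks above, the sketch below draws a simple random sample, and a stratified random sample with proportional allocation, from a hypothetical customer list; the list, the strata and the sample size of 200 are assumptions for illustration only.

```python
import random

# Hypothetical customer list: (customer id, region) pairs.
customer_list = [(i, "metropolitan" if i % 3 else "regional") for i in range(1, 1001)]

# Simple random sample: every customer has a known, equal chance of selection.
random.seed(1998)
simple_sample = random.sample(customer_list, k=200)

# Stratified random sample: sample within each region in proportion to its size,
# so the views of smaller customer groups are still accurately captured.
strata = {}
for customer in customer_list:
    strata.setdefault(customer[1], []).append(customer)

stratified_sample = []
for region, members in strata.items():
    k = round(200 * len(members) / len(customer_list))  # proportional allocation
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample), len(stratified_sample))
```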

FCS, Transport, and Homeswest were able to demonstrate that they had

taken a rigorous approach to developing a list of their customers, and then

taking a sample of people from this listing. This increases the likelihood

that their survey results will be based on a sample of individuals that

accurately matches the characteristics of their customer group.

Other agencies have experienced varying degrees of difficulty in taking a

sample of their customers, due mainly to factors such as:

•  confusion concerning whether the sampling unit was an agency or individuals within these same agencies; and

•  the impossibility of constructing a customer database.

As an illustrative example of these difficulties, the customer group for the

OMI’s survey was defined as the peak community organisations that it has

the most dealings with on a regular basis. From a list of 27 organisations,

OMI provided a consultant with a list of 50 names, being individuals

associated with these 27 host organisations. The consultant then attempted

to undertake a census of the list of 50 names provided. This methodology

raises questions with regard to how OMI selected the 27 agencies and the 50

individuals to be surveyed, and whether these individuals are able to

accurately represent the complete range of views in the 27 host organisations.


Data Collection Methods

The nature of the data collection method to be used (i.e. mailed questionnaire,

face to face interviews, telephone interviews) will be determined by the

nature of the topics to be covered in the survey, time and cost considerations,

and the characteristics of the agency’s customers. For example, face to face

interviews are a good technique for minimising communication problems

with people from a non-English speaking background, but they are also quite

expensive. Mailed questionnaires are relatively inexpensive, but require

respondents to have well developed literacy skills.

FCS’s customers are interviewed by telephone if possible, with non-

respondents and non-telephone users being followed up with a mailed

questionnaire. FCS is aware of the potential for differing data collection

methods to produce differing survey results and has undertaken some analysis

to compare the ratings provided by clients via telephone interviews versus

mailed questionnaire. Interpreters are made available, if required, to assist

respondents from a non-English speaking background with the interview.

Homeswest collected half its survey data using face to face interviews and

the other half using mailed questionnaires. Homeswest advise that research

by public housing bodies interstate has shown that using these two different

data collection methods with rental tenants has only a minimal impact on

the survey’s results. However, the face to face interviews are not targeted at

any particular geographic area or any particular client group. Given that

Homeswest has experienced problems with response rates in the Kimberley

region and with Aboriginal people, it may be worth considering targeting

these two groups for face to face interviews exclusively.

Response Rates

A survey’s response rate is given by the percentage of sampled individuals

who actually complete the survey. For example, if 100 people are sent a

mailed questionnaire and 75 of them actually complete and return this

questionnaire, the survey’s response rate is 75 per cent. It is quite common

for individuals who do not respond to a survey to differ markedly from those

who do. The survey’s results can then be biased and misleading as a result

of this difference.


There is no absolute answer to the question ‘What is a minimum acceptable

response rate?’ It depends upon who is being surveyed and what methodology

is being used. As a basic guide, a response rate of at least 50 per cent is generally

considered adequate for analysis and reporting, while a response rate of 70 per

cent or more is considered to be very good. It should be borne in mind that this

is only an approximate guide and has no statistical basis. A demonstrated lack

of response bias is more important than a high response rate.⁴

Response rates for agency surveys varied from a high of 69 per cent for

DOLG, through to a low of 11.5 per cent for DOLA. Low response rates (less

than 50 per cent) were experienced by five of the seven agencies examined.

These low response rates suggest that the surveys' findings are likely to be biased, but the exact direction and size of this bias cannot be known without undertaking further analysis. Good survey practice suggests that such analysis should be routinely conducted for all surveys.⁵ FCS was the only agency that

attempted to address this issue.

Agencies therefore need to do better in this aspect of their surveys, both by

raising their response rates and by routinely analysing the extent of bias in

their surveys arising from non-response.
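The report does not prescribe a particular method of non-response analysis. One minimal sketch, using made-up data, is to compute the response rate and then compare early respondents with those who replied only after a reminder, on the common (but imperfect) assumption that late respondents resemble non-respondents.

```python
# Illustrative non-response check using assumed data.
# It does not prove the absence of bias; it only indicates its likely direction.

frame_size = 20            # customers invited to take part (assumed)
respondents = [            # (satisfied?, responded only after a reminder?) - assumed data
    (True, False), (True, False), (False, False), (True, True),
    (False, True), (True, True), (False, True), (True, False),
]

response_rate = len(respondents) / frame_size
print(f"Response rate: {response_rate:.1%}")

# Compare early respondents with late (post-reminder) respondents.
early = [sat for sat, late in respondents if not late]
late = [sat for sat, late in respondents if late]
print(f"Satisfied (early): {sum(early) / len(early):.0%}")
print(f"Satisfied (late):  {sum(late) / len(late):.0%}")
```

Comparing the profile of respondents against known characteristics of the full customer list, for example region or client group, is another common check.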

Sampling Precision

Survey findings obtained from a sampled group of customers are subject to

a degree of sampling variability or error. That is, these findings may differ

from the figures that would have been obtained if a different sample had

been selected or if a census of the entire population had been undertaken. A

large sample is more likely than a small sample to produce results that

closely resemble those that would have been obtained from a census. The

difference between survey estimates and expected census results can be

measured statistically using the concept of a ‘standard error’.
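The report does not state a formula, but a formulation consistent with note (1) to Table 2 is the confidence interval for a reported proportion p̂ from a simple random sample of n customers drawn from a population of N:

$$
\hat{p} \;\pm\; z \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}} \sqrt{\frac{N-n}{N-1}}
$$

The second square-root term is the finite population correction; z is about 1.96 at the 95 per cent confidence level used in Table 2, and about 1.64 at the 90 per cent level used in the example below.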

⁴ Adapted from: Babbie, E. 1990, Survey Research Methods, Wadsworth Publishing, California.

⁵ See: United States General Accounting Office, 1993, Developing and Using Questionnaires, author, Washington, D.C.; and Groves, R. 1989, Survey Errors and Survey Costs, John Wiley, New York.


For example, if a survey found that 60 per cent of agency customers were

satisfied with an error of ± 20 per cent at a 90 per cent confidence level, we

could be 90 per cent certain that in the population of agency customers,

somewhere between 40–80 per cent of all customers were satisfied. In this

case there would be uncertainty as to whether an apparently large increase

in customer satisfaction from 35 per cent to 60 per cent over three years

really did reflect a 'true improvement' or simply 'sampling variability'.⁶ This is illustrated in Figure 5.

Figure 5: The Effect of Sampling Variability

This figure illustrates how sampling variability can be mistaken for a true change in the population, particularly for a short time series with a large sampling error.
[Chart: reported percentage of customers satisfied, 1994–95 to 1997–98, with upper and lower limits of sampling error shown around each point. True situation: 50 per cent satisfied, no trend; the apparent upward trend is due to sampling variability.]
Source: OAG

For the purpose of analysing trends over time, particularly when measuring

small changes, agencies should consider obtaining samples large enough to

keep their sampling errors to less than ± 5 per cent with a high degree of

confidence. See Appendix 3 for a guide to selecting samples of an appropriate

size.
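Appendix 3 tabulates sample size requirements. As an illustrative cross-check only (assuming simple random sampling, p = 0.5 and a 95 per cent confidence level), the number of completed responses needed for a target margin of error can be approximated by inverting the proportion formula above, as sketched below.

```python
import math

def required_sample_size(population, margin, z=1.96, p=0.5):
    """Approximate sample size for a target margin of error on a proportion,
    adjusted for a finite population (assumes simple random sampling)."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)          # finite population adjustment
    return math.ceil(n)

# For example, a margin of +/- 5 per cent at 95 per cent confidence:
for population in (200, 1000, 30000):
    print(population, required_sample_size(population, margin=0.05))
```

The figures are completed responses, so the initial sample drawn would need to be larger to allow for non-response.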

⁶ The above two paragraphs have been adapted from Australian Bureau of Statistics (Victoria), 1993, An Introduction to Sample Surveys.



Sampling precision ranged from a low of ± 3 per cent for FCS and Homeswest

(excellent) to a high of ± 17 per cent for DOLA. Of the seven agencies

examined, four had error rates greater than ± 5 per cent. Agencies with a

high level of error in their surveys were generally unaware of the significance

of sampling error. Improving the precision of agency surveys can be achieved

by taking a larger sample of agency clients, and reducing levels of non-

response.

Validity of Measurement

Validity concerns the extent to which a survey is really measuring what it is

supposed to measure. An example of a lack of measurement validity would

be to give an Australian I.Q. test to a non-English speaking migrant. The

test results will be a reflection of their limited familiarity with English, not

their level of intelligence.

Developing a valid survey for performance reporting purposes requires agency

objectives that are clear, specific and measurable.

Establishing a survey’s validity is an important matter, and should be a

standard part of planning and developing a survey. If it is not done it can

cause systematic survey errors that lead to results that are consistently

biased in one direction.

None of the seven agencies formally assessed the validity of their surveys.

The most basic approach for establishing the validity of a survey is to ask a

knowledgeable person, 'Does this survey question appear to be a good measure

of this concept or outcome objective? Does it look right, does it make

sense?’. In the absence of formal validity testing and assessment by the

seven agencies, the examination applied this most basic of tests to determine

the face validity of the seven surveys. This test raised issues regarding two

of the surveys examined.

The relevant effectiveness indicators addressed by FCS’s Customer Perception

survey are: ‘Proportion of family and community support customers who

have increased their knowledge and skills; and extent to which family and

community support customers develop their own solutions and


independence’. The relevant questions in FCS’s survey are ‘On your last

contact, how much did you learn that was useful to you?’; and ‘How confident

are you that you will be able to handle a similar situation in the future?’.

FCS realises that these two questions are likely to only be a partial measure

of the relevant indicators, and they have managed this issue by reporting a

suite of indicators from several different surveys to counter the limitations

of using a single survey question in isolation.

OMI’s 1997 objective was “to promote a harmonious community and equitable

access for people from diverse cultural, linguistic and religious backgrounds

by assisting the State Government in the development of its policies and by

supporting community initiatives”. This objective is not clear, specific and

measurable, making it difficult to develop valid survey questions.

The OMI objective could suggest that their customers are individual persons

in the community. However, OMI view their customers as being a number of

peak community organisations but not individual persons. The validity of

OMI’s performance indicators would therefore be enhanced if its objective

stated the outcomes OMI is seeking to achieve in relation to these

organisations.

OMI’s survey asks selected representatives from these peak community

organisations to rate the impact of OMI’s activities on the wider community.

Given that these representatives are unlikely to have adequate data on this

impact, and that OMI acts as a resource centre for these peak community

organisations (the respondent’s host organisation), this raises further doubts

as to the validity of the survey.

Reliability of Measurement

Reliability concerns the extent to which a survey would give similar results

if it were given more than once to the same group of people. Reliability is a

function of random measurement error in the data. Determining a survey’s

reliability is another basic part of survey planning and development.

An example of reliability is weighing oneself on the bathroom scales. If you

got on your bathroom scale and it read 70 kilograms, you got off and on

again and it read 75 kg, repeated the process and it read 65 kg, your scale


would not be a very reliable measure. Thus a lack of survey reliability is of

concern as it leads to inconsistent averaged findings, particularly for groups

with fewer than 60–80 respondents.
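Reliability of a multi-item satisfaction scale is often summarised with Cronbach's alpha, a measure of internal consistency. The report does not prescribe a method, so the sketch below, using made-up 1–5 ratings, is illustrative only.

```python
# Illustrative internal-consistency check (Cronbach's alpha) for a
# multi-item satisfaction scale, using made-up 1-5 ratings.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

# Rows are respondents, columns are survey items (ratings 1-5).
ratings = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

k = len(ratings[0])                              # number of items
items = list(zip(*ratings))                      # one tuple of ratings per item
totals = [sum(row) for row in ratings]           # each respondent's total score
alpha = (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))
print(f"Cronbach's alpha: {alpha:.2f}")          # values near 1 indicate consistent items
```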

None of the seven agencies attempted to assess the reliability of their survey,

which is of concern for the agencies with a small number of respondents

(i.e. DOLA, DOLG, and OMI). This means that the performance indicator

figures reported for these agencies are likely to contain significant amounts of random measurement error, and hence their findings are more likely to be inaccurate.

Survey Reporting and Record Keeping

In addition to technical rigour, two further issues of importance are:

•  Standard of reporting – clearly presenting the survey's results and indicating any significant limitations in the methodology; and

•  Record keeping – the availability of supporting documentation.

Standard of Reporting

Survey data should be objectively analysed using standard scientific methods.

Agencies need to explain their results, particularly any unusual or unexpected

findings. Results should be interpreted with the appropriate level of precision,

and expressed with the proper degree of caution about the conclusions drawn.

Performance data needs to be presented in a manner and form that enables

agency staff and external audiences to assess the current level of performance

and whether it is improving or worsening, and to what extent. Charts and

graphs present findings clearly, but should include a description of how the

survey was conducted, explanatory notes about the findings, and a discussion

of the limitations of the survey’s methodology.


A related issue is the matter of independence of data collection and analysis.

The use of an independent consultant to conduct an agency’s customer survey

adds to the perceived impartiality of the survey, and also encourages

customers to respond to the survey in a frank manner. This is especially

relevant for those agencies that:

•  regulate or fund their customers; or

•  provide services upon which their customers are highly dependent.

All of the seven agencies examined analysed their survey data using simple

descriptive statistics such as percentages or average scores. Most agencies

then presented their findings in a simple tabular format (FCS, DOLA,

Transport, OMI, and RIA), with or without supporting explanatory notes.

Homeswest made use of coloured graphics to present their findings, while

DOLA relied on lengthy written descriptions of their findings.

Several of the agencies examined made errors in the analysis and/or

interpretation of their performance indicators based upon survey findings

(DOLA, DOLG, Homeswest, and OMI). In particular, agencies had problems

in interpreting changes over time in satisfaction ratings due to ignoring the

impact of sampling precision. This seems to be a problem of agency expertise,

rather than a matter of deliberate misrepresentation.

Many agencies made no attempt to interpret the meaning of their performance

indicator data, simply presenting the data and leaving the reader to draw their own conclusions about agency performance (FCS, Transport, OMI and RIA). There was a general lack

RIA). There was a general lack of explanatory notes to inform the reader

about any limitations in the methodology used for the survey. No agency

reported their findings along with the relevant standard error figure.

The reported performance indicators often contained little additional

information to assist readers in assessing agency performance. For example they

did not include a standard or a benchmark against which to compare agency

performance. They also did not generally include explanatory notes to inform

the reader about the survey’s technical limitations.

Agencies such as FCS and Transport were well aware of the importance of

the independence of survey data collection and analysis, while other agencies

such as DOLG need to further consider this aspect of their customer survey.


Record Keeping

Agencies and/or their consultant were able to provide the examination team

with copies of all requested documents, generally with minimal delay. DOLA

was noteworthy for using a consultant to check on their audit trail as part of

a review of their customer survey process.

Agencies were generally unsure how long the paper copies of survey forms

should be kept for audit purposes, although the Market Research Society of Australia recommends hard copies be kept for two years. Managers were unaware whether these records were covered by a Retention and Disposal Schedule under the Library Board of Western Australia Act 1951. This Schedule is the means by which agencies can legitimately dispose of public records or transfer records of permanent evidential value to archives.⁷

Recommendations

Agencies should:

•  ensure that their surveys are conducted in a scientifically rigorous manner so as to minimise all types of survey error;

•  present their performance indicators in conjunction with relevant supporting information such as comparative benchmarks and the survey's technical limitations; and

•  assess the cost-effectiveness of undertaking customer surveys, particularly those agencies with small budgets and a large number of customers.

⁷ Refer to OAG Report No 6 of 1996 titled 'For the Public Record' for further information about public sector records management.


Appendix 1: Key Steps in Conducting Customer Surveys

Designing, implementing, and utilising customer surveys is best thought of

as an ongoing iterative process characterised by the following key steps:8

1. Determine who the agency's customers are

• Identify customers for all relevant agency products and services, e.g. internal and external clients, direct and indirect clients

2. Determine agency objectives for seeking customer feedback

• Determine why customer feedback information is required, e.g. to assist with the planning of a new service, to further improve current services, for accountability purposes, etc.

• Identify specific information needs in relation to each purpose, and in relation to each client group

• Clarify how the agency will use the survey information once it has been obtained

3. Develop the agency's measurement strategy

• Develop institutional structures and plans to clarify who will have responsibility for managing the customer survey, who will decide upon the utilisation of any findings, and who will then implement these actions

• Clarify how frequently the agency will need to obtain feedback from its customers, e.g. on an ongoing basis versus periodically

• Identify the most appropriate methodology for data collection given the agency's information needs, budgetary resources, and the characteristics of the client group

• Calculate an appropriate sample size for the survey based upon the degree of precision required in the survey's findings

• Consider obtaining input from technical specialists to assist with the development and pilot testing of the survey

8 Adapted from: Treasury Board of Canada, 1992, Measuring Client Satisfaction.


4. Gather and analyse information

• Adopt standard scientifically valid methodologies to minimise errors, ethical concerns, and other potential problems

• Rigorously analyse the data and make appropriate comparisons, e.g. over time, over geographical areas, over different customer groups, against service standards, against industry benchmarks

• Interpret the survey's results with the appropriate level of precision, and express the proper degree of caution about the conclusions that can be drawn from the results

• Attempt to explain any unexpected or unusual results

• Document procedures followed in the course of the survey. Edit and archive the data/findings to allow independent confirmation of the results

5. Report and utilise the survey's results

• Ensure that published findings are consistent with the survey's results

• Share the results with service delivery staff, agency managers, and customer groups

• Set agency priorities, and develop strategies to address service areas in need of improvement

• Set service standards that are both challenging and realistic

• Establish monitoring procedures to assess performance improvements over time

6. Review agency's measurement practices

• Are internal agency lines of responsibility and accountability clear?

• Are agency practices technically sound?

• Is the agency's survey producing useful information in a cost-effective manner?


Appendix 2: Judging the Quality of Survey Research: A Checklist

1. Does the survey report contain a list of specific issues or questions the

survey is intended to address?

2. Do the research questions posed by the investigators appropriately and

adequately address the topic of the survey?

3. Are the research questions posed by the investigators well organised

and well structured?

4. Does the report identify the target population to which generalisation

was desired?

5. Does the report describe the sampling frame used and the rationale for

its use?

6. Does the report indicate a close match between the target population

and the operational population?

7. Does the report describe the sampling procedure used? Were probability

procedures used?

8. Are non-response rates reported for the entire survey, and for individual

questions?

9. Were non-response rates low enough to avoid substantial bias errors?

10. Are any analyses of potential sampling bias reported (including patterns

of non-response)?

11. Are sample sizes sufficient to avoid substantial sampling error? Are

standard errors of estimate reported?

12. Is the primary mode of data collection (mail questionnaire, face to face

interviews, telephone interviews) consistent with the objectives,

complexity, and operational population of the survey?

13. Is a copy of the survey instrument provided in the survey report?

14. Was the instrument thoroughly pre-tested as a part of its development?

15. Are instructions for completing the survey clear and unambiguous?

16. Are questions in the instrument clear and unambiguous?

17. Do questions in the instrument encourage the respondent’s honesty in

admitting a lack of knowledge or uncertainty?


18. Are questions in the instrument free from obvious bias, slanting or

loading?

19. Are the survey questions ordered appropriately?

20. Was the survey consistent with ethical research practices? Have the

issues of anonymity and confidentiality been handled adequately?

21. Does the survey report contain a description of field and data management

procedures?

22. Were these field/data management procedures adequate and appropriate?

Is it likely that major sources of bias error have been avoided?

23. Are the data analyses clearly described?

24. Are data analyses appropriate to the purposes of the survey? Were all

relevant statistical assumptions satisfied?

25. Did the survey provide answers to the research questions posed by the

investigators?

26. Are the researcher’s conclusions sound, or are alternative interpretations

of the findings equally plausible?

27. Does the survey report contain a description of any deviations from the

survey’s implementation plan, and the likely impact of these deviations?

28. Does the survey report contain an analysis of the quality of the survey,

including its reliability and validity?9

9 See: Jaeger, R. 1988, Complementary Methods for Research in Education, American Educational Research Association, Washington DC; and Babbie, E. 1990, Survey Research Methods, Wadsworth, California.


Appendix 3: A Guide to Sample Size Requirements10

Required sample sizes, at the 95 per cent confidence level, to yield a given level of sampling error (the margin of error for a proportion), assuming an incidence in the population of 0.5.

                      Required final sample size for an error of:
Population Size       ± 3%        ± 5%        ± 10%
50                      48          45           33
100                     92          80           49
250                    203         152           70
500                    341         217           81
750                    441         254           85
1 000                  516         278           88
2 500                  748         333           93
5 000                  880         357           94
10 000                 964         370           95
25 000               1 023         378           96
50 000               1 045         381           96
100 000              1 056         383           96
1 000 000            1 066         384           96
Infinitely large     1 067         385           96

Note: The figures presented in this table refer to the number of completed, useable questionnaires that the agency needs to receive back, not the starting sample size. For example, if 278 respondents are required from a population of 1 000 to yield a precision of ± 5 per cent and the expected response rate is 60 per cent, the starting sample size would be 464 people (i.e. 278 × 100/60, rounded up).

10 Adapted from: Salant, P. and Dillman, D. 1994, How to Conduct Your Own Surveys, John Wiley & Sons, New York.
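The figures in the table, and the worked example in the note above, can be reproduced with a short calculation. The sketch below assumes the sample size formula for a proportion given in the Salant and Dillman text cited in the footnote, with a 95 per cent confidence level (z = 1.96) and p = 0.5; it is offered as an illustration rather than a prescribed method:

    # Sketch of the calculation that appears to underlie the table above,
    # assuming a 95% confidence level (z = 1.96) and an incidence of p = 0.5.
    import math

    def completed_responses(population, error, z=1.96, p=0.5):
        """Completed, useable questionnaires needed for a given sampling error."""
        n = population * p * (1 - p) / ((population - 1) * (error / z) ** 2 + p * (1 - p))
        return round(n)

    def starting_sample(completed_needed, expected_response_rate):
        """Questionnaires to issue, allowing for expected non-response."""
        return math.ceil(completed_needed / expected_response_rate)

    print(completed_responses(1000, 0.05))   # 278, as in the table
    print(starting_sample(278, 0.60))        # 464, as in the worked example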


Performance Examination Reports

Tabled

1996

Improving Road Safety May 1, 1996

The Internet and Public Sector Agencies June 19, 1996

Under Wraps! – Performance Indicators of Western Australian Hospitals August 28, 1996

Guarding the Gate – Physical Access Security Management within the Western Australian Public Sector September 24, 1996

For the Public Record – Managing the Public Sector’s Records October 16, 1996

Learning the Lessons – Financial Management in Government Schools October 30, 1996

Order in the Court – Management of the Magistrates’ Court November 12, 1996

1997

On Display – Public Exhibitions at: The Perth Zoo, The WA Museum and The Art Gallery of WA April 9, 1997

Bus Reform – Competition Reform of Transperth Bus Services June 25, 1997

Get Better Soon – The Management of Sickness Absence in the WA Public Sector August 27, 1997

Waiting for Justice – Bail and Prisoners in Remand October 15, 1997

Public Sector Performance Report 1997 November 13, 1997

Private Care for Public Patients – The Joondalup Health Campus November 25, 1997

1998

Selecting the Right Gear – The Funding Facility for the Western Australian Government's Light Vehicle Fleet May 20, 1998

Weighing up the Marketplace – The Ministry of Fair Trading June 17, 1998

On request these reports may be made available in an alternate format for those with visual impairment.