
EVALUATION STRATEGY 2015-2019

INDUSTRY.GOV.AU

Further information

For information on other department initiatives please see the department’s website at: www.industry.gov.au/OCE

For more information or to comment on this publication please contact:

Martine Rodgers

Evaluation Unit

Department of Industry and Science

GPO Box 9839

CANBERRA ACT 2601

Telephone: +61 2 6213 7194

The views expressed in this publication are those of the authors and do not necessarily reflect those of the Australian Government or the Department of Industry and Science.

© Commonwealth of Australia 2015

ISBN: 978-1-925092-67-7 (print)

ISBN: 978-1-925092-68-4 (online)

This work is copyright. Apart from any use as permitted under the Copyright Act 1968, no part may be reproduced or altered by any process without prior written permission from the Australian Government. Requests and inquiries concerning reproduction and rights should be addressed to [email protected]. For more information on Office of the Chief Economist research papers please access the department’s website at: www.industry.gov.au/OCE.

Creative Commons Licence

With the exception of the Coat of Arms, this publication is licensed under a Creative Commons Attribution 3.0 Australia Licence.

The Creative Commons Attribution 3.0 Australia Licence is a standard form licence agreement that allows you to copy, distribute, transmit and adapt this publication provided that you attribute the work. A summary of the licence terms is available from http://creativecommons.org/licenses/by/3.0/au/deed.en. The full licence terms are available from http://creativecommons.org/licenses/by/3.0/au/legalcode.

The Commonwealth’s preference is that you attribute this publication (and any material sourced from it) using the following wording:

Source: Licensed from the Commonwealth of Australia under a Creative Commons Attribution 3.0 Australia Licence. The Commonwealth of Australia does not necessarily endorse the content of this publication.

Contents

Introduction

Performance measurement and reporting

Impact on evaluation activity

What is evaluation?

Good evaluation practices – a consistent approach to evaluation

Governance

Executive Board

Steering Committees and Reference Groups

Evaluation and Audit—what is the difference?

Assurance and Audit Committee

Internal Audit role

Evaluation Unit role

Planning for evaluation across the policy and programme lifecycle

Evaluation types

Evaluation readiness

Prioritising evaluation effort

A four-year evaluation plan

Alterations to the evaluation plan

Scaling evaluation

Responsibility for conducting evaluations

Evaluation approach

Lessons learned — taking advantage of completed evaluations

Building capacity and capability

Fostering a culture of evaluative thinking

Building evaluation capability

Supporting guidance material

Measures of success

Evaluation maturity

Reviewing the Evaluation Strategy

APPENDIX A – Glossary

APPENDIX B – Departmental guidance materials

Introduction

This Evaluation Strategy provides a framework to guide the consistent, robust and transparent evaluation and performance measurement of programmes and policies in the department.

Evaluations, reviews and performance monitoring provide assurance that policies and programmes are delivering outcomes as intended, allow performance to be tracked so that corrections can be made, and inform future policy and programme design. As Australia adapts to changing economic and policy environments, the evidence gained from evaluations and other forms of performance measurement and assessment supports the decision-making of government.

For government, and this department, the continual questioning of how we are performing is a critical part of good performance management and accountability. We need to know:

Have we achieved what we set out to do?

How are we progressing in achieving the department’s strategic objectives?

Could we have done things better?

Should we continue to do this or do something else?

Through asking these types of questions we gain an understanding of what works and what doesn’t work and why, what is being done well and what is not, what should be pursued and what should not. This knowledge can improve the design and implementation of effective interventions.

The Public Governance, Performance and Accountability Act 2013 (PGPA Act) establishes a core set of obligations that apply to all Commonwealth entities. The Enhanced Commonwealth Performance Framework brings an increase in external scrutiny, and introduces new requirements for strategic planning, measuring and assessing performance, evaluation and reporting.

Reflecting the department’s response to the PGPA Act, the principles outlined in this Evaluation Strategy (the Strategy) will strengthen evaluation and performance measurement capacity in the department and support building a culture of evaluative thinking, ultimately leading to better resource allocation and decision-making and the evolution of programmes.

This Strategy:

Outlines the department’s approach to performance measurement and reporting, according to good evaluation practice

Establishes a protocol for policy and programme areas to plan for evaluation across the lifecycle of a programme

Introduces a strategic, risk-based, whole-of-department approach to prioritising evaluation effort, and illustrates how evaluations may be scaled based on the value, impact and risk profile of a programme

Describes how evaluation findings can be used for better decision-making

Describes how the department is building evaluation capability and a culture of continuous improvement

Outlines how the department will measure its progress in implementing this Strategy.

This Strategy is not intended to be a complete guide to evaluation and performance measurement. It is supported by a range of internal and external resources including:

the department’s comprehensive guidance material and templates for planning and conducting an evaluation

the department’s Performance Measurement and Reporting Framework

the Department of Finance Enhanced Commonwealth Performance Framework


the Australian National Audit Office Best Practice Guide—Successful Implementation of Policy Initiatives.


Performance measurement and reporting

The department’s performance measurement and reporting framework supports the implementation of the Enhanced Commonwealth Performance Framework (the Framework) under the PGPA Act.1

The Framework enables Commonwealth entities to devise the necessary links between their performance information and their external reporting. Entities are encouraged to adopt performance measurement methodologies that better assess the results of activities and articulate their performance story. The Framework introduces a more transparent and cohesive form of performance reporting related to the activities of an entity in achieving its purpose.

Under the PGPA Act, Commonwealth entities are required to produce and publish annually:

A Corporate Plan, which sets out the purpose of the entity and the method of measuring and assessing the entity’s performance in achieving its purpose.

An Annual Performance Statement, which measures and assesses the entity’s performance in achieving its purpose in the reporting period. This statement is published as part of the Annual Report.

There is a strong emphasis on performance monitoring, evaluation and reporting. Further, the Framework establishes a clear cycle of planning, measuring, evaluation and reporting of results to Parliament, ministers and the public. The revised reporting framework allows flexibility in using various data sources to better assess the results of government programmes. Performance measures are tailored specifically for each programme so that they reflect its design, including its specified objectives, inputs, outputs and outcomes.

Traditional Key Performance Indicators (KPIs) can be complemented with other measures which provide a more meaningful link between public resources used and the results delivered. These may include benchmarking, surveys, peer reviews, and comprehensive evaluations.

Given the department’s high level of complexity, a hierarchy has been adopted to measure and report on performance. Performance measures are identified and reported on at the level of the ‘Activity’, ‘Strategic Objective’ and ‘Entity’, reflecting the outputs, impacts and outcomes of the department’s activities. This approach means that performance monitoring and assessment is undertaken at the appropriate levels, where useful performance measures can be developed and quality data is available.

1 Information in this section is based on guidance available as at 5 June 2015. Department of Finance (2015), Enhanced Commonwealth Performance Framework, Public Management Reform Agenda, webpage viewed 5 June 2015, https://cfar.govspace.gov.au/legislation-pgpa-act/#transitional.

Figure 1: Reporting hierarchy

Vision (outcome): Enabling growth and productivity for globally competitive industries

Strategic objectives/purposes (impact): Supporting science and commercialisation; Growing business investment and improving capability

Activity (intermediate output): Business research, development and commercialisation; Economic transition

Component (output): Entrepreneurs’ Infrastructure Programme; Cooperative Research Centres

The department sets out its vision and four strategic objectives in the Strategic Plan 2015-19.

Impact on evaluation activity

Good performance information will draw on multiple sources that offer different perspectives on the achievement of a programme’s objectives. The performance story of a programme is likely to be best supported through a diverse set of measures. 

Evaluations provide a balanced performance story through their incorporation of programme logic tools and assessment against set criteria. They provide meaningful information and evidence on a component’s aim and purpose, the effectiveness and efficiency with which it is pursued, and the activities focussed on that purpose. They provide an opportunity to look beyond performance monitoring and reporting and consider how well the programme is achieving its outcomes.

The department responds to growing demand for evidence-based analyses of policy and programme impacts by applying robust research and analytical methods, both quantitative and qualitative, to determine and isolate what works in industry and science policies and programmes.


What is evaluation?

Evaluation is an essential part of policy development and programme management. The continual questioning of what we are trying to achieve and how we are performing enables us to learn and improve what we do, ensuring that decision-making is informed by the best available evidence.

Policy and programme evaluation involves collecting, analysing, interpreting and communicating information about the performance of government policies and programmes, in order to inform decision-making and support the evolution of programmes.

Evaluation helps to answer questions such as:

Is the policy contributing to the intended outcomes or any unintended outcomes?

Are there better ways of achieving these outcomes?

What has been the impact of the programme?

Is the policy still aligned with government priorities, particularly in light of changing circumstances?

Should the current programme be expanded, contracted or discontinued?

Is there a case to establish new programmes?

Can resources be allocated more efficiently by modifying a programme or mix of programmes?2

Evaluation is integral to continual improvement. It is not a one-off, or ‘tick the box’ exercise. Evaluation supports:

Evidence-based policy development

evidence-based policy development and decision-making

a stronger basis for informing government priorities

more efficient resource allocation.

Public accountability

the public accountability requirements of programme sponsors and governments

the department’s risk-management processes, helping to encourage greater public trust in government.

Learning

shared learning to improve policy development and programme design and delivery

a culture of organisational learning within the department.

Performance reporting

the analysis and assessment of balanced and meaningful performance information to report on progress in achieving strategic outcomes

an enhanced ability to achieve government priorities.

2 Davis G & Bridgman P (2004), Australian Policy Handbook, Allen & Unwin, Sydney, pp.130-131.5

Good evaluation practices – a consistent approach to evaluation

If evaluations are to be valuable to decision-makers across government, consistency in approach and planning are required. Evaluations should be conducted to a standard that ensures the information is credible and evidence-based.

The table below outlines the key principles used to guide evaluation in this department.3

Evaluation Principles:

1. Integrated

Evaluation is core business for the department and is not simply a compliance activity

Evaluation planning is undertaken at the New Policy Proposal stage or early in the design of programmes

Evaluation results are communicated widely and inform decision-making and policy development.

2. Fit for purpose

The scale of effort and resources allocated to an evaluation is proportional to the value, impact, strategic importance and risk profile of a programme

The evaluation method is selected according to the programme lifecycle, feasibility of the method, availability of data and value for money.

3. Evidence-based

The department applies robust research and analytical methods to assess impact and outcomes

Collectors of administrative data strive to attain baseline measurements and trend data in forms that are relatable to external data sets.

4. Specific and timely

Evaluation planning is guided by the timing of critical decisions to ensure sufficient bodies of evidence are available when needed.

5. Transparent

All evaluation reports are communicated internally unless there are strong reasons to limit circulation

The department will move towards publishing more content externally to strengthen public confidence and support public debate.

6. Independent

Evaluation governance bodies have some independence from the responsible policy and programme areas

Typically, evaluators will have some independence from the responsible programme and policy areas.

3 Adapted from Department of the Environment (2015), Evaluation Policy, Canberra, p.7.

Governance

Executive Board

The Executive Board is responsible for oversight of the department’s evaluation activity, including:

Endorsing the department’s annual evaluation plan

Agreeing the evaluations to be listed and reported on in public documents, including the Corporate Plan and Annual Performance Statement

Observing progress against the annual evaluation plan

Noting emerging themes from completed evaluations and audits to help inform future policy and programme design

Observing information sharing between the Audit Committee and Evaluation Unit regarding evaluation activity within the department, including the implementation of major recommendations

Considering whether to publish evaluation reports or summaries to support greater transparency of the department’s performance and dissemination of learning as noted in the National Commission of Audit.

Steering Committees and Reference Groups

Evaluations conducted by the department are overseen by a Steering Committee or Reference Group. A Steering Committee is more appropriate for larger or more complex evaluations, involving multiple stakeholder groups or layers of government. Steering Committees increase transparency, cross-fertilisation and independence. A Steering Committee gives direction to the team conducting the evaluation and ensures the evaluation is of an acceptable quality and is independent. While it is beneficial to have the knowledge of policy and programme area representatives, there should be a level of independence in the membership of Steering Committees.

A Reference Group differs from a Steering Committee in that it is a sounding board for ideas, but does not formally approve or direct the evaluation. The Chair of a Reference Group should be independent from the policy and programme areas.

Evaluation and Audit—what is the difference?

The roles of evaluators and auditors are quite different but they have similar outputs—both inform performance reporting and address public accountability requirements. There are strategic linkages and synergies between the two functions. When planning an evaluation or audit, and identifying lessons learned to inform policy development and programme delivery, evaluation and audit activity should be considered and coordinated. The responsibilities of the Assurance and Audit Committee and the role of Internal Audit and the Evaluation Unit are outlined below to help identify who does what.

Assurance and Audit Committee

The Assurance and Audit Committee—established in accordance with the PGPA Act—provides independent advice and assurance to the Executive on the appropriateness of the department’s accountability and control framework, independently verifying and safeguarding the integrity of the department’s financial and performance reporting.

The Annual Audit Plan provides an overview of the delivery of Internal Audit services, which include General audits, ICT audits, Management Initiated Reviews and Assurance Advisory Services.


Internal Audit role

Internal audit provides an independent and objective assurance and advisory service to:

provide assurance to the Secretary that the department's financial and operational controls designed to manage the organisation's risks and achieve the department's objectives are operating in an efficient, effective and ethical manner

assist the Executive and senior managers in the effective discharge of their responsibilities

improve the effectiveness of risk management, control and governance including business performance

advise the Assurance and Audit Committee regarding the efficient, effective and ethical operation of the department.

Evaluation Unit role

The Evaluation Unit is the authoritative source of advice on evaluation. The Unit’s role is to help build capability in performance measurement and evaluation, and to promote a culture of evaluative thinking and continual improvement across the department. The Unit is responsible for developing the department’s Evaluation Plan and reporting progress against it.

The Evaluation Unit is available to provide expert advice and guidance to divisions in planning and conducting evaluations. It conducts evaluations on behalf of divisions as directed by the Executive Board and can provide expertise through membership of Steering Committees and Reference Groups.

The Evaluation Unit is responsible for…

Providing expert advice and guidance to line areas in planning and conducting evaluations

Strengthening line areas’ capability to conduct evaluations

Developing new tools to support line areas and increase the rigour of evaluations across the department

Conducting or participating in selected high-priority evaluations

Providing advice as members of Steering Committees and Reference Groups

Promoting evaluation thinking across the department and sharing report findings

Developing the department’s four-year Evaluation Plan and reporting progress against the Plan to the Executive Board.

The responsibilities of the Evaluation Unit are further described under ‘Evaluation approach’ and ‘Building evaluation capability’.


Planning for evaluation across the policy and programme lifecycle

The decision to conduct an evaluation is strategic rather than simply routine. Decision-makers need to give considerable thought to what they want an evaluation to address and when an evaluation should occur.

Evaluation activity has different purposes at different points in the programme lifecycle. All policy and programme areas need to consider evaluation requirements from the early policy and programme design stage (ideally this occurs at the New Policy Proposal stage and in conjunction with the Evaluation Unit). Planning for evaluation at this early stage helps to identify what questions need to be answered and when, so that meaningful data can be collected to measure the programme’s outcomes and impact.

Evaluation types

The department uses four key types of evaluation, grouping them under the categories of formative (as the initiative is getting underway) and summative (once results can be observed).4

Formative

Prospective evaluation—ensures that the programme is prepared for robust evaluations from the outset using a programme logic model to refine the proposal and determine performance measures and data sources.5

Post-commencement—also known as a post implementation review, these evaluations ‘check in’ soon after the programme has begun. This type of evaluation focuses on initial implementation, allowing decision-makers to identify early issues regarding programme administration and delivery and take corrective action.

Summative

Monitoring—these evaluations draw on performance information to monitor the programme’s progress. They are usually suited to programmes at a “business as usual” phase of the programme lifecycle, and the primary audience is internal to the programme. The findings of a monitoring evaluation provide an indication of performance, contributing to the measurement of the department’s strategic objectives; a basis for future reviews; and an opportunity to test the programme’s data sources to see whether they are providing the required performance information.

Impact—Impact evaluations are usually larger and more complex evaluations which allow for assessment of a programme’s performance. Where possible they would test this against a ‘counterfactual’: they seek to compare programme outcomes with a prediction of what would have happened in the absence of the programme, and may include research about programme alternatives to allow comparison of results. They may involve a cost-effectiveness or cost-benefit analysis.

This type of evaluation tends to have more of an external focus—the primary audience is external to the programme. Impact evaluations usually aspire to find objectively verifiable results, contributing to the measurement of the department’s strategic objectives and outcome.
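To illustrate the counterfactual logic, the sketch below compares hypothetical observed outcomes against a predicted counterfactual and computes a simple benefit-cost ratio. It is a minimal sketch only: the figures, discount rate and function names are invented, and an actual impact evaluation would derive these inputs from programme data under an agreed methodology.

```python
# Hypothetical worked example of the counterfactual comparison behind an
# impact evaluation with a cost-benefit analysis. All figures and the
# discount rate are invented for illustration; they are not departmental data.

DISCOUNT_RATE = 0.07  # assumed annual real discount rate


def present_value(annual_values, rate=DISCOUNT_RATE):
    """Discount a series of annual values (year 0 first) to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))


# Estimated additional business income observed with the programme ($m/year)
observed = [0.0, 4.0, 6.0, 8.0]
# Counterfactual: the outcome analysts predict would have occurred anyway
counterfactual = [0.0, 2.5, 3.0, 3.5]
# Programme delivery costs ($m/year)
costs = [5.0, 2.0, 2.0, 2.0]

# Only the difference between observed and counterfactual outcomes is
# attributed to the programme.
attributable = [o - c for o, c in zip(observed, counterfactual)]

pv_benefits = present_value(attributable)
pv_costs = present_value(costs)

print(f"PV of attributable benefits: ${pv_benefits:.2f}m")
print(f"PV of costs: ${pv_costs:.2f}m")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```

Attributing only the difference between observed and counterfactual outcomes to the programme is what distinguishes an impact evaluation from straightforward performance monitoring.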

4 Refer to glossary at Appendix A.

5 United States General Accounting Office (1990), Prospective Evaluation Methods, Washington D.C.

Figure 2: Evaluation stages in the programme and policy lifecycle

Programme logic is used at the policy and programme development stage to clarify the intent of a programme and assist in the development of performance measures, key evaluation questions and data sources, including establishing baseline data.

One year into a programme a post-commencement evaluation can be conducted to examine whether the programme is operating as intended.

At the 12–18 month mark, the data collection and performance measures of a programme can be tested through a monitoring evaluation, to see whether the indicators and data provide sufficient information to assess the effectiveness and impact of the programme. This would also give an assessment of performance against the short-term performance indicators.

Depending on the nature of the programme, after three to five years an impact evaluation can be conducted using an appropriate methodology.
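To make the programme logic step above concrete, the sketch below reduces the logic of an invented grants programme to a simple data structure. The programme, stages and questions are hypothetical; in practice a programme logic is developed as a diagram with the Evaluation Unit, but the chain from inputs through to outcomes, evaluation questions and baseline data is the same.

```python
# Hypothetical programme logic for an invented grants programme, reduced to a
# simple data structure. Real programme logics are developed as diagrams with
# the Evaluation Unit; this sketch only shows the chain the prose describes.

programme_logic = {
    "inputs": ["grant funding", "programme staff", "delivery partners"],
    "activities": ["assess applications", "pay grants", "advise recipients"],
    "outputs": ["grants awarded", "advisory sessions delivered"],
    "short-term outcomes": ["recipients adopt improved business practices"],
    "long-term outcomes": ["increased business investment and capability"],
    "key evaluation questions": [
        "Did recipients change behaviour relative to non-recipients?",
        "Which data sources will measure each outcome?",
    ],
    "baseline data": ["pre-grant revenue and employment of applicants"],
}

# Walk the chain from inputs to outcomes, as a programme logic diagram would.
for stage, items in programme_logic.items():
    print(f"{stage}: {'; '.join(items)}")
```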

Evaluation readiness

To help policy and programme areas plan for evaluation and performance monitoring, all programme areas will complete a Performance Measurement and Reporting Template, or similar documentation. The information will inform the policy or programme Evaluation Lifecycle. Completing these documents and instilling a performance-measurement mindset will ensure the department’s programmes are ‘evaluation ready’, i.e. key evaluation questions have been identified and there is enough evidence and data to enable an evaluation to be conducted.


Evidence from evaluations informs policy and programme design beyond the programme being evaluated.

Prioritising evaluation effort

A four-year evaluation plan

The department takes a strategic, risk-based, whole-of-department approach to prioritising evaluation effort, providing greater oversight of the allocation of resources to evaluation activity across the department.

As required under the PGPA Act, the department’s Evaluation Plan covers a four-year period (over the forward estimates). Elements of the Plan will be published externally by the department, including in the Corporate Plan and Annual Performance Statement.

While noting the many benefits of evaluations, it is not feasible, cost-effective or appropriate to fully evaluate every programme and policy.

The department prioritises evaluation effort and resourcing based on the following criteria:

Total funding allocated for the programme

Internal priority (importance to the department’s and Australian Government’s goals)

External priority (importance to external stakeholders)

Overall risk rating of the programme

Track record (previous evaluation, strength of performance monitoring, lessons learnt).

The department’s Evaluation Plan (published internally) is developed in consultation with divisions using the above criteria as a guide to how and when evaluations should be conducted. To reduce duplication and leverage effort, the department takes account of audit and research activity when developing its Evaluation Plan, adopting a whole-of-department approach to evaluation planning.

The Evaluation Plan uses tiers to identify evaluations of highest priority and strategic importance (see the table under ‘Evaluation approach’; an illustrative scoring sketch follows the tier descriptions below).

Tier one—evaluations of highest priority and strategic importance

Tier two—second order of strategic importance and may include methodologies to test the performance monitoring of the programme.

Tiers three A & three B—programmes of lesser strategic importance, low risk, or a terminated programme. Tier three A includes single payment grants.

Alterations to the evaluation plan

The Evaluation Plan is not set in stone. The department recognises that circumstances change over time. It is important that ongoing intelligence about programmes’ evolution, and adjustments in Government policy settings, inform the Evaluation Plan. Generally, the Plan is updated annually, but alterations can be made at any time.


Scaling evaluation

The scale of an evaluation should be proportionate to the size, significance and risk profile of the programme (sometimes referred to as ‘fit for purpose’). This means that evaluative effort and resources should not be expended beyond what is required to satisfy public accountability and the needs of decision-makers.

The selection of an evaluation method should also take into account the programme lifecycle and feasibility of the method, the availability of data and value for money. Evaluations should be appropriate to the particulars of a given programme; they are not a ‘one size fits all’ arrangement.

Options could include light touch/desk reviews, grouping a suite of programmes into one evaluation, sectoral reviews and research projects that focus on the effectiveness and impact elements of programme performance.

The increased accountability and scrutiny under the PGPA Act further reinforces the critical role of the Evaluation Unit as the authoritative source for guidance on evaluation. To ensure evaluative effort and resources are used to maximum benefit, it is a requirement that divisions consult with the Evaluation Unit when determining an appropriate scope and methodology for their evaluations.

Responsibility for conducting evaluations

Priority, scale and methodology will inform who will conduct an evaluation. Subject-matter or technical expertise should also be considered, as should resource availability, time and cost. Options include: an individual or small team; working collaboratively across divisions or other agencies; engaging the Evaluation Unit to conduct the evaluation; working in partnership with the Evaluation Unit; or engaging an external consultant or academic.

The policy owner is accountable for managing the evaluation with assistance from the programme delivery team.

There should be a level of independence from the areas responsible for policy and programme delivery. For evaluations of lesser strategic importance or terminated programmes this could be through an independent member on a Steering Committee or Reference Group. Seconding individuals from divisions separate from the policy and delivery areas is a viable option to provide some independence, build capability and alleviate resourcing constraints.


Evaluation approach

The table below outlines the fundamental issues which should be considered in determining the scale of an evaluation. There may also be times where Cabinet or central agencies determine the type of evaluation and when it should be conducted. The Evaluation Unit can provide advice and guidance on planning and conducting evaluations across all of the tiers.

Tier One

Characteristics of programme: significant funding; highest risk; strategically significant; may be a flagship programme; high public profile and expectations; politically significant.

Likely characteristics of evaluation: formal process; extensive consultation; high resource allocation; central agencies may be involved; wide public release.

Evaluation Unit role: should be consulted on development of methodology/terms of reference; involved either conducting the evaluation or represented on the Steering Committee; an independent evaluator could be internal or external to the department.

Tier Two

Characteristics of programme: moderate funding; medium risk; new or untried programme that requires testing of assumptions and/or data; medium level of strategic importance; moderate public profile and expectations.

Likely characteristics of evaluation: greater level of data collection and analysis; multiple evaluation points during development and implementation; regular process reporting.

Evaluation Unit role: should be consulted on development of methodology/terms of reference; the evaluation could be conducted by or in partnership with the Evaluation Unit or by an independent person/team; the evaluator could be internal or external to the department.

Tiers Three A & Three B

Characteristics of programme: relatively small funding or single payment grants; low risk; lesser strategic importance; not widely publicised; similar to other programmes that have been subject to evaluation activity.

Likely characteristics of evaluation: informal process; can be completed internally; limited data requirements; low resource allocation; limited consultation; low profile release.

Evaluation Unit role: should be consulted on development of methodology/terms of reference; the Reference Group should have a level of independence.


Lessons learned — taking advantage of completed evaluations

Policy making is a process of continuous learning, rather than a series of one-off, unrelated decisions. Effective use of organisational knowledge in policy development enables policy makers to learn from previous successes and failures to develop better policy. Policy and programme evaluations provide the evidence base to inform best-practice expenditure of public funding and the development of policy.6

Evaluations increase understanding about the impact of government policy, programmes, regulations and processes, and are just one of the sources of performance information that help the department to assess whether it is achieving its strategic priorities. Along with research and audit findings, the outcomes from evaluations are a valuable resource; they support evidence-based policy and the continual improvement and evolution of programmes.

Organisational learning uses past experiences to improve policy, recognising that the government may repeatedly deal with similar problems. Developing a culture of organisational learning can make an organisation more responsive to changes in its environment and facilitate adaptation to those changes.7

It is expected that evaluation findings will be communicated widely across the department, particularly to inform decision-making, with resulting recommendations acted upon routinely. It is also expected that evaluation findings and emerging trends are captured, reported and communicated, and brought to the attention of the Executive Board as appropriate.

To improve the sharing of evaluation findings and make them accessible across the department, all evaluations commissioned or undertaken by the department will be accessible internally through the Research Management System. The System provides significant insight into the approaches used to design policy and implement departmental programmes.

6 Department of Industry and Science (2014), Policy Development Toolkit, Canberra.

7 Ibid.


Building capacity and capability

Building capacity and capability in performance measurement and evaluation is not limited to technical skills and knowledge. Performance measurement and evaluation needs to be integrated into the way we work and think.

Fostering a culture of evaluative thinking

As we are called to adapt to changing economic and policy environments, measuring how we are performing and providing credible evidence becomes paramount. This cannot be achieved without a shift to a culture of evaluative thinking and continuous improvement.

Organisational culture significantly influences the success of evaluation activity and requires strong leadership. This department is building a supportive culture, led from the Executive, that encourages self-reflection, values results and innovation, looks for better ways of doing things, shares knowledge and learns from mistakes.

Without such a culture, evaluation is likely to be resisted, perceived as a threat rather than an opportunity and treated as a compliance exercise.

To develop a culture of evaluative thinking the department requires:

a clear vision for evaluation and continuous improvement

clear responsibilities and expectations to empower staff, along with appropriate training and guidance material

knowledge-sharing and tolerance for mistakes to encourage learning and improve performance

a culture of reward to showcase effective evaluations

support for the outcomes of robust evaluation to build trust, welcoming the identification of problems or weaknesses.8

Building evaluation capability

A culture of evaluative thinking and capability building go hand in hand—both are required to achieve a high level of evaluation maturity within a high performing organisation.

Conducting an evaluation requires significant knowledge, skill and experience. The department is committed to building performance measurement and evaluation capability and technical skills to support staff in planning and conducting evaluations, and undertaking performance monitoring.

Staff in the Evaluation Unit and across the department continue to develop their knowledge of specialised evaluation techniques and methods. The Evaluation Unit is made up of evaluation professionals who are members of the Australasian Evaluation Society (AES) and other professional organisations.

The role and responsibilities of the Evaluation Unit include building capability through providing expert advice and guidance, and ensuring the department is meeting its external reporting accountabilities.

8 ACT Government (2010), Evaluation Policy and Guidelines, Canberra.

In addition to the points noted under ‘Evaluation Unit role’:

The Evaluation Unit can advise on the…

Development of programme and policy logics

Conduct of programme and policy evaluations (prospective, post-commencement, monitoring and impact evaluations)

Preparation of:

evaluation lifecycles (to guide evaluations throughout a programme’s lifecycle)

evaluation plans (to guide a single evaluation)

Development of programme-level performance measures (KPIs)

Application of departmental evaluation principles.

Supporting guidance material

The Evaluation Unit has developed comprehensive guidance material to support on-the-job learning. Topics range from planning for an evaluation to conducting an evaluation and developing Terms of Reference. The material is designed to be used in conjunction with advice available from the Evaluation Unit.

The department also offers targeted learning on programme logic and developing performance measures.


Measures of success

Evaluation maturity

Developing and maintaining evaluation maturity is an ongoing process that must be balanced with other organisational objectives. This Strategy establishes a framework to guide the department through the stages of maturity which encompass good evaluation practices.

To establish a baseline from which we can identify strengths and weaknesses and priorities for improvement, the department has assessed its current evaluation maturity. While it is following best practice in some elements of evaluation maturity, overall it is at the developing stage of maturity.

Maturity levels: Beginning; Developing (current rating); Embedded (target June 2016); Leading (target June 2018)

Integrated

Beginning: Awareness of the benefits of evaluation is low. Evaluation is seen as a compliance activity and a threat. Fear of negative findings and recommendations leads to a perception of ‘mandatory optimism’ regarding programme performance. Insufficient resources are allocated to evaluation activities. Evaluation and performance measurement skills and understanding are limited, despite pockets of expertise.

Developing: Appreciation of the benefits of evaluation is improving. Evaluation is being viewed as core business for the department, not simply a compliance activity. A culture of evaluative thinking and continual improvement is introduced and communicated across the department. Skills in performance measurement and evaluation are developed through targeted training and guidance materials. An evaluation website and guidance materials are developed. The role of the Evaluation Unit is widely communicated, the Unit is seen as the authoritative source for advice, and further expertise is being developed within it.

Embedded: A culture of evaluative thinking and continual improvement is embedded across the department, with lessons learnt being acted upon. Evaluation is seen as an integral component of sound performance management. General evaluation skills are widespread, with improved skills and knowledge in developing quality performance measures. Evaluation Unit team members have high-order skills and experience which the agency leverages, and hold and are encouraged to undertake formal qualifications in evaluation and related subjects.

Leading: Evaluations motivate improvements in programme design and policy implementation. There is a demonstrated commitment to continuous learning and improvement throughout the agency. The department is recognised for its evaluation and performance monitoring expertise, and innovative systems and procedures.

Fit for purpose

Beginning: Frequency and quality of evaluation is lacking.

Developing: Guidelines for prioritising and scaling evaluation activity are used. Priority programmes are evaluated.

Embedded: Evaluations use fit-for-purpose methodologies and evaluation effort is scaled accordingly.

Leading: Specialist and technical skills are well developed to apply appropriate methodologies.

Evidence-based

Beginning: Data holdings and collection methods are insufficient or of poor quality.

Developing: Planning at programme outset improves data holdings and collection methods. Skills and knowledge in applying robust research and analytical methods to assess impact and outcomes are developing, and the quality of evaluations is improving.

Embedded: A range of administrative and other data is used in the assessment of performance. Robust research and analytical methods are used to assess impact and outcomes. Evaluations conform to agency standards.

Leading: The department continually develops and applies robust research and analytical methods to assess impact and outcomes. Evaluation and performance measurement conform to recognised standards of quality.

Specific and timely

Beginning: Effort and resources are allocated in an ad hoc and reactive manner with little foresight. Developing performance information at the inception of a programme is ad hoc and of variable quality.

Developing: Evaluation activity is coordinated. An Evaluation Plan is in place and regularly monitored. Strategically significant and risky programmes are prioritised. Planning for evaluation and performance monitoring is being integrated at the programme design stage, and all programmes are assessed for being ‘evaluation ready’.

Embedded: The department employs strategic, risk-based, whole-of-department criteria to prioritise evaluation effort. Evaluation Plans are updated annually and progress is monitored on a regular basis. Planning for evaluation and performance measurement is considered a fundamental part of policy and programme design, and all programmes have programme logic, performance and evaluation plans in place.

Leading: The department’s approach to evaluation and performance planning is seen as the exemplar, and all programmes are assessed as ‘evaluation ready’.

Transparent

Beginning: Findings and recommendations are held in programme and policy areas, with no follow-up on the implementation of recommendations.

Developing: Findings and recommendations are viewed as an opportunity to identify lessons learned. Evaluation findings and recommendations are available in the research management system to improve the dissemination of lessons learned and inform policy development.

Embedded: Findings are widely disseminated and drive better performance. The website and guidance materials are a valuable resource for staff. Evaluation findings and reports are published where appropriate.

Leading: Findings are consistently used to optimise delivery and have influence outside the department.

Independent

Beginning: Independent conduct and governance of evaluations is lacking. Evaluations are conducted and overseen by the policy or programme areas responsible for delivery of the programme.

Developing: There is an improved level of independence in the conduct and governance of evaluations.

Embedded: All evaluations include a level of independence.

Reviewing the Evaluation Strategy

This Strategy will be reviewed in 12 months (June 2016) to assess whether it is meeting the needs of the department. The measures of success will include that it is:

Consistent with the PGPA Act

Efficiently allocating evaluation effort

Leading to more effective conduct of evaluations

Fostering a culture of evaluative thinking

Ultimately contributing to more effective programmes.

Results of the review will be communicated to the Executive Board. The review will include an assessment of the department’s level of evaluation maturity, one year on.

9 The evaluation maturity table is adapted from: ACT Government (2010), Evaluation Policy and Guidelines, Canberra, p.17.


APPENDIX A – Glossary

Activity: A distinct effort of an entity undertaken to achieve specific results or aims. An entity’s purpose may be achieved through a single activity or multiple activities.

Baseline: Information collected before or at the start of a programme that provides a basis for planning and/or assessing subsequent programme progress and/or impact.

Component: A programme or initiative within the department.

Cost benefit analysis/cost effectiveness analysis: Evaluation of the relationship between programme costs and outcomes. Can be used to compare different programmes with the same outcome to determine the most efficient intervention.

Counterfactual: A hypothetical statement of what would have happened (or not) had the programme not been implemented.

Evaluation: A systematic assessment of the operation and/or the outcomes of a programme or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the policy or programme.

Expenditure Review Principles: The principles used to review Australian Government activities and to assess proposals for new government activities. They include: appropriateness, effectiveness, efficiency, integration, performance assessment and strategic policy alignment.

Formative evaluation: Generally undertaken while a programme is forming (prior to implementation), this type of evaluation is typically used to identify aspects of a programme that can be improved to achieve better results.

Impact evaluation: Impact evaluations are usually larger and more complex evaluations which allow for assessment of a programme’s performance. Where possible they test this against a ‘counterfactual’.

Key Performance Indicator: Quantitative or qualitative variables that provide a reliable way to measure intended changes. Performance indicators are used to observe progress and to compare actual results with expected results.

Monitoring evaluation: Monitoring evaluation draws on performance information to monitor the programme’s progress. Usually suited to programmes which have reached a ‘business as usual’ phase in the programme lifecycle, the findings provide: an indication of performance, contributing to the measurement of the department’s strategic objectives; a basis for future reviews; and the opportunity to test the programme’s data sources to see whether they are providing the performance information required.

Outcome: A result or effect that is caused by or attributable to the programme.

Output: The products, goods and services which are produced by the programme.

Programme logic: A management tool that presents the logic of a programme in a diagram (with related descriptions). It links longer-term objectives to a programme’s intermediate and shorter-term objectives. The programme logic is used to ensure the overall programme considers all the inputs, activities and processes needed to achieve the intended programme outcomes.

Post-commencement evaluation: Post-commencement evaluations focus on initial implementation, allowing decision-makers to identify early issues regarding programme administration and delivery and take corrective action.

Prospective evaluation: Prospective evaluations ensure the programme is prepared for robust evaluations from the outset, using a programme logic model to refine the proposal and determine performance measures and data sources.

Purpose: In the context of the PGPA Act, the strategic objectives, functions or roles of an entity, against which entities undertake activities.

Summative evaluation: Summative evaluations generally report when the programme has been running long enough to produce results, although they should be initiated during the programme design phase. They assess positive and negative results, as well as intended and unintended outcomes. This form of evaluation is used to determine whether the programme caused demonstrable effects on specifically defined target outcomes.
