CES 2012 Conference Halifax, Monday, May 14, 2012 | 15:15–16:45
Strategies for tackling differences: Learning from evaluability assessments of horizontal initiatives to prepare for evaluations of program activity architecture components

DESCRIPTION

The Treasury Board of Canada now requires full evaluation coverage for government spending. As a result, federal evaluation plans increasingly include evaluations of program activity architecture components comprising a wide range of activities beyond the individual program level. In the absence of pre-existing program theory and common performance frameworks, these broad evaluations pose significant challenges during evaluation design (i.e., linking outcomes across activities, initiatives, programs and organizations). This approach also has implications for data collection, as differences need to be identified, quantified and qualified across varied and often heterogeneous stakeholder groups. New strategies must be developed to address these challenges at the evaluation planning and assessment stages, particularly to ensure that stakeholders are effectively identified and engaged in the process. The presentation will illustrate the lessons learned from recent evaluability assessments of two horizontal initiatives and discuss how this experience informed the evaluation of program activity architecture components.

TRANSCRIPT

Page 1

CES 2012 Conference Halifax, Monday, May 14, 2012 | 15:15–16:45

Strategies for tackling differences: Learning from evaluability assessments of horizontal initiatives to prepare for evaluations of program activity architecture components

Page 2

Outline & Objective

Objective
• Sharing lessons learned from conducting evaluability assessments of horizontal initiatives
• With a view to informing the conduct of evaluations of program activity architecture (PAA) components

Context

Overview – evaluability assessment projects:
• Horizontal – Genomics Research and Development Initiative (GRDI)
• Horizontal – Major Project Management Office Initiative (MPMOI)
• PAA-level – Minerals & Metals, Markets, Investment & Innovation (MMMII)

Key issues:
• #1: Multiple stakeholders
• #2: Evaluation program theory

Strategies and remaining challenges

Questions?

Page 3

Context

The Treasury Board of Canada requires full evaluation coverage

Federal evaluation plans increasingly include evaluations of program activity architecture (PAA) components 

The PAA comprises a wide range of activities beyond the individual program level (sub, sub-sub and sub-sub-sub levels)

Departmental Performance Measurement Frameworks (PMFs) set out the expected results and the performance measures to be reported for all programs identified in the PAA components

The indicators in the departmental PMFs are limited in number and focus on supporting departmental monitoring and reporting

PMFs are developed with a view to supporting effective evaluations

Foreseen opportunities for evaluators and for organizations

Page 4

Context

However, PAA‐level evaluations pose significant challenges during evaluation design (i.e., linking outcomes across activities, initiatives, programs and organizations). 

Absence of pre‐existing program theory and integrated performance measurement frameworks

Evaluation program theory needs to go beyond linking RMAFs (Results-based Management and Accountability Frameworks) to PAA components

Many PAA boxes have never been evaluated

Need to spend more time discovering what’s “inside the boxes”…

Differences need to be identified, quantified and qualified across varied and often heterogeneous stakeholder groups

Implications for evaluability assessment projects

Page 5

Overview – evaluability assessment projects

Goal

Horizontal – GRDI:
• Build and maintain genomics research capacity in government departments.
• Protect and improve human health, develop new treatments for chronic and infectious diseases, protect the environment, and manage agricultural and natural resources in a sustainable manner.
• Support evidence-based decision-making and policy/standards/regulations development, as well as facilitate the development of Canadian commercial enterprises.

Horizontal – MPMOI:
• Address federal regulatory systemic issues and capacity deficiencies for major resource projects and provide interim capacity funding for Aboriginal consultation.
• Via the Major Project Management Office (housed at NRCan): i) provide overarching project coordination, management and accountability; and ii) undertake research and identify options that drive further performance improvements to the federal regulatory system.

Page 6

Overview – evaluability assessment projects

Departments & Agencies
• GRDI: Agriculture and Agri-Food, Environment Canada, Fisheries and Oceans, Health Canada, Public Health Agency, National Research Council, Natural Resources
• MPMOI: Canadian Environmental Assessment Agency, Environment Canada, Fisheries and Oceans, Aboriginal Affairs and Northern Development Canada, Transport Canada, National Energy Board, Canadian Nuclear Safety Commission

Budget
• GRDI: $20 million/year
• MPMOI: $30 million/year

Management
• GRDI: Management committees (ADM and working groups)
• MPMOI: Management committees (DM, ADM, DG, departmental committees and working groups)

Main Stakeholders
• GRDI: Mainly federal scientists, academic researchers, private sector, others
• MPMOI: Natural resource industries, Aboriginal groups, environmental groups, federal organizations, provincial organizations, others

Page 7

Overview – evaluability assessment projects

Horizontal – GRDI and Horizontal – MPMOI

Evaluation Committee
• Interdepartmental Evaluation Advisory Committee (IEAC)
• Program staff + evaluation staff
• Observers/stakeholders

Approach/Methods
• Data availability and quality assessment
• Document, file and data review
• Program rationale and profile
• Revision/adjustment of the logic model (MPMOI)
• Development of evaluation questions
• Individual consultations with working groups from each organization
• Half-day roundtable workshop
• Development of the Data Collection Matrix (DCM)
• Development of evaluation methodology and options
• Characterization of key information needed for the evaluation
• Final workshop for validation

Page 8

Overview – evaluability assessment projects

PAA-level – MMMII

Goal
• Minerals & Metals, Markets, Investment & Innovation (MMMII)
• Mining Scientific Research and Innovations (1.1.1.1)
• Socio-economic Minerals and Metals Research and Knowledge for Investments and Access to Global Markets (1.1.1.2)

Components
• 2 entire Branches (7 divisions): Minerals, Metals and Materials Knowledge Branch (MMMKB); Minerals, Metals and Materials Policy Branch (MMMPB)
• Various programs/groups in 2 additional Branches: CANMET Mining and Mineral Sciences Laboratories (MMSL); CANMET Materials Technology Laboratory (MTL)

Budget
• $20 million/year + specific funding for the relocation of MTL to Hamilton

Management
• ADM-level overall and DG-level by Branch

Main Stakeholders
• NRCan & other federal organizations
• Provincial organizations
• International organizations
• Academic/research institutions
• Industries/private sector
• Canadian embassies/commissions
• Aboriginal groups and remote communities
• Non-governmental organizations and other interest groups

Page 9

MMMII PAA Overview
[Diagram: MMMII program activity architecture, highlighting the MTL relocation to Hamilton]

Page 10

MMMII PAA Overview
[Diagram labels: MTL relocation to Hamilton; statistics and economic analysis products; domestic and international policy advice and events, sector planning; emerging materials/eco-material research projects; mineral extraction and processing research projects]

Page 11

Main difference between horizontal and PAA‐level projects?

Evaluation Advisory Committees

GRDI and MPMOI
• Evaluation Advisory Committee (EAC)
  • Mixed-team (internal and external) evaluators
  • Key program management staff
• Interdepartmental Evaluation Advisory Committee (IEAC)
  • Mixed-team (internal and external) evaluators
  • 1 internal evaluator from each participating department/agency
  • 1-2 program staff from each participating department/agency

MMMII
• No committee
• Ad-hoc targeted consultations for information requests and feedback/input

Page 12

Issue #1 – Multiple stakeholders

Having multiple stakeholders poses significant challenges:

“Patchy” internal and external stakeholder identification for:
• Risk-based planning and design of the evaluation
• Consultation during the evaluation assessment/planning stage
• Data collection during the evaluation project (interviews, survey)

“Dispersed” key information and data needed for the evaluation
• Different Departments/Branches have their own information
• Little data at the level of the PAA sub-activities (PAA specific)
• Difficult to roll up data at the program/PAA level (e.g., financial)

Variable level of expectation/understanding of the evaluation process

Variable level of engagement
• Less engagement when the evaluation scope covers entire Branches/divisions/projects, including activities not tied to funding renewal
• Salience of the process is lower and input is mainly on a voluntary basis (potentially affecting the quality and timeliness of the evaluation)

Page 13

Issue #2 – Evaluation program theory at PAA level

The design of PAA-level evaluations poses significant challenges:

Lack of coherent program theory at the PAA level
• Program theory is the backbone of the evaluation
• PAA theory is mainly at the strategic level, as defined in the PMF
• Frequent absence of a PAA-level RMAF (or RMAF used differently by different units)
• Absence of a logic model and underlying program theory

Lack of shared understanding and vision of the PAA logic
• Individual vision of the contribution of their own unit to the PAA strategic objectives
• Lack of awareness and different levels of understanding of the contribution of other units
• Potential tension when it comes to defining the individual and collective contribution to the PAA strategic objectives

Page 14

Strategy #1 – Well-structured and engaged evaluation committee

A committee created at the planning stage has proven effective to:

• Bring together the two communities of practice (program and evaluation)
• Facilitate the identification of internal and external stakeholders
• Ensure that all stakeholders agree on the evaluation process
• Manage expectations and consider individual needs with respect to:
  • How findings will be presented in the evaluation report
  • Who will be accountable for responding to recommendations
  • Other concurrent activities (funding renewals, audits, etc.)
• Determine the data requirements for the evaluation
• Facilitate buy-in on the development of evaluation method options
• Increase the level of awareness in participating organizations, which facilitates consultation during the evaluation

Page 15

Strategy #2 – Participatory logic model and program theory development and review

In close collaboration with the committee (and in consultation with individual groups/units):

• Revisit the PAA logic at the strategic level and translate it into program theory that works at the operational level
• Translate the theory into a PAA logic model and validate it
• Discuss up front how relevance and efficiency/economy will be evaluated
• Design specific evaluation questions and validate them
• Design a preliminary evaluation approach and indicators (data collection matrix) and validate them
• Test methods and indicators against available data and other factors
• Reconvene all committee members at the end to fully validate the selected approach (e.g., half-day roundtable workshop)
• Ensure that the outcome of the consultation is captured and disseminated

Page 16

Remaining challenges

[Diagram: PAA-level evaluation challenges, highlighting instability]

Page 17

Remaining challenges

PAA-level heterogeneity
• Mixed bag of organizations, branches, divisions, programs, projects and initiatives with highly diverse:
  • Values and culture
  • Business processes/settings
  • Stakeholder groups
• Always the presence of outlier components
• Inconsistent rationale for inclusion/exclusion of PAA components
  • Included on the basis of funding/special allocations and/or politics
  • Not always aligned with PAA strategic objectives/PAA logic

PAA-level instability
• Changes during the evaluation project and evaluation period (and beyond)
• Multiplies the challenges of assessing relevance and efficiency/economy

Page 18

Remaining challenges

PAA-level connectivity
• Artificial/conceptual connections across PAA components
• Underestimated connections with other PAA components/contributions of other PAAs

Other challenges
• PAA-level recommendations
  • Acceptability (what is in it for me?)
  • Accountability (who is accountable?)
  • Actionability (how will they be put into action?)
• PAA-level stakeholder fatigue
  • Key staff responsible for components under several PAAs are involved in multiple, concurrent evaluations

Page 19

Take‐home message

When evaluating at the PAA level, you will often spend too much time discovering what’s “inside the boxes”…

… so please consider using an evaluation advisory committee!

Page 20

Thank you for your time and feedback

CONTACT INFO

Frédéric Bertrand, MSc CE
Vice-President, Evaluation | Science-Metrix
514-495-6505 x117 | frederic.bertrand@science-metrix.com

Michelle Picard-Aitken, MSc
Senior Research Analyst | Science-Metrix
514-495-6505 x125 | michelle.picard-aitken@science-metrix.com

Andrea Ventimiglia, BSc MJ
Research Analyst | Science-Metrix
514-495-6505 x124 | andrea.ventimiglia@science-metrix.com

ACKNOWLEDGEMENT
Julie Caruso, MLIS
Research Analyst | Science-Metrix

Science-Metrix
1335 Mont-Royal Est
Montreal, Quebec H2J 1Y6
Telephone: 514-495-6505
Fax: 514-495-6523
Email: info@science-metrix.com

WEB SITE: www.science-metrix.com

Questions?