Strategies for tackling differences: Learning from evaluability assessments of horizontal initiatives to prepare for evaluations of program activity architecture components
The Treasury Board of Canada now requires full evaluation coverage for government spending. As a result, federal evaluation plans increasingly include evaluations of program activity architecture components comprising a wide range of activities beyond the individual program level. In the absence of pre-existing program theory and common performance frameworks, these broad evaluations pose significant challenges during evaluation design (i.e., linking outcomes across activities, initiatives, programs and organizations). This approach also has implications for data collection, as differences need to be identified, quantified and qualified across varied and often heterogeneous stakeholder groups. New strategies must be developed to address these challenges at the evaluation planning and assessment stages, particularly to ensure that stakeholders are effectively identified and engaged in the process. The presentation will illustrate the lessons learned from recent evaluability assessments of two horizontal initiatives and discuss how this experience informed the evaluation of program activity architecture components.
2. Outline & Objective
Objective
Sharing lessons learned from conducting evaluability assessments of horizontal initiatives
With a view to informing the conduct of evaluations of program activity architecture (PAA) components
Context
Overview – evaluability assessment projects:
Horizontal – Genomics Research and Development Initiative (GRDI)
Horizontal – Major Project Management Office Initiative (MPMOI)
PAA‐level – Minerals & Metals, Markets, Investment & Innovation (MMMII)
Key issues:
#1: Multiple stakeholders
#2: Evaluation program theory
Strategies and remaining challenges
Questions?
3. Context
The Treasury Board of Canada requires full evaluation coverage
Federal evaluation plans increasingly include evaluations of program activity architecture (PAA) components
The PAA comprises a wide range of activities beyond the individual program level (sub, sub‐sub and sub‐sub‐sub levels)
Departmental Performance Measurement Frameworks (PMFs) set out the expected results and the performance measures to be reported for all programs identified in the PAA components
The indicators in the departmental PMFs are limited in number and focus on supporting departmental monitoring and reporting
PMFs are developed with a view to supporting effective evaluations
Foreseen opportunities for evaluators and for organizations
4. Context
However, PAA‐level evaluations pose significant challenges during evaluation design (i.e., linking outcomes across activities, initiatives, programs and organizations)
Absence of pre‐existing program theory and integrated performance measurement frameworks
Evaluation program theory needs to go beyond linking Results‐based Management and Accountability Frameworks (RMAFs) to PAA components
Many PAA boxes have never been evaluated
Need to spend more time discovering what’s “inside the boxes”…
Differences need to be identified, quantified and qualified across varied and often heterogeneous stakeholder groups
Implications for evaluability assessment projects
5. Overview – evaluability assessment projects
Horizontal – GRDI
Goal:
• Build and maintain genomics research capacity in government departments.
• Protect and improve human health, develop new treatments for chronic and infectious diseases, protect the environment, and manage agricultural and natural resources in a sustainable manner.
• Support evidence‐based decision‐making and policy/standards/regulations development, as well as facilitate the development of Canadian commercial enterprises.

Horizontal – MPMOI
Goal:
• Address federal regulatory systemic issues and capacity deficiencies for major resource projects, and provide interim capacity funding for Aboriginal consultation.
• Via the Major Project Management Office (housed at NRCan): i) provide overarching project coordination, management and accountability; and ii) undertake research and identify options that drive further performance improvements to the federal regulatory system.
6. Overview – evaluability assessment projects
Horizontal – GRDI
Departments & agencies:
• Agriculture and Agri‐Food Canada
• Environment Canada
• Fisheries and Oceans Canada
• Health Canada
• Public Health Agency of Canada
• National Research Council
• Natural Resources Canada
Budget: $20 million/year
Management: management committees (ADM and working groups)
Main stakeholders:
• Mainly federal scientists
• Academic researchers
• Private sector
• Others

Horizontal – MPMOI
Departments & agencies:
• Canadian Environmental Assessment Agency
• Environment Canada
• Fisheries and Oceans Canada
• Aboriginal Affairs and Northern Development Canada
• Transport Canada
• National Energy Board
• Canadian Nuclear Safety Commission
Budget: $30 million/year
Management: management committees (DM, ADM, DG, departmental committees and working groups)
Main stakeholders:
• Natural resource industries
• Aboriginal groups
• Environmental groups
• Federal organizations
• Provincial organizations
• Others
7. Overview – evaluability assessment projects
Horizontal – GRDI and MPMOI
Evaluation committee:
• Interdepartmental Evaluation Advisory Committee (IEAC)
• Program staff and evaluation staff
• Observers/stakeholders
Approach/methods:
• Data availability and quality assessment
• Document, file and data review
• Program rationale and profile
• Revision/adjustment of the logic model (MPMOI)
• Development of evaluation questions
• Individual consultations with working groups from each organization
• Half‐day roundtable workshop
• Development of the Data Collection Matrix (DCM)
• Development of evaluation methodology and options
• Characterization of key information needed for the evaluation
• Final workshop for validation
8. Overview – evaluability assessment projects
PAA‐level – MMMII
Goal:
• Minerals & Metals, Markets, Investment & Innovation (MMMII)
  • Mining Scientific Research and Innovations (1.1.1.1)
  • Socio‐economic Minerals and Metals Research and Knowledge for Investments and Access to Global Markets (1.1.1.2)
Components:
• 2 entire branches (7 divisions):
  • Minerals, Metals and Materials Knowledge Branch (MMMKB)
  • Minerals, Metals and Materials Policy Branch (MMMPB)
• Various programs/groups in 2 additional branches:
  • CANMET Mining and Mineral Sciences Laboratories (MMSL)
  • CANMET Materials Technology Laboratory (MTL)
Budget: $20 million/year, plus specific funding for the relocation of MTL to Hamilton
Management: ADM‐level overall and DG‐level by branch
Main stakeholders:
• NRCan and other federal organizations
• Provincial organizations
• International organizations
• Academic/research institutions
• Industries/private sector
• Canadian embassies/commissions
• Aboriginal groups and remote communities
• Non‐governmental organizations and other interest groups
10. PAA‐level – MMMII PAA Overview
[Logic model diagram: MMMII PAA components – emerging materials/eco‐material research projects; mineral extraction and processing research projects; domestic and int’l policy advice and events, sector planning; statistics and economic analysis products; MTL relocation in Hamilton]
11. Main difference between horizontal and PAA‐level projects?
Evaluation Advisory Committees
GRDI and MPMOI:
• Evaluation Advisory Committee (EAC)
  • Mixed‐team (internal and external) evaluators
  • Key program management staff
• Interdepartmental Evaluation Advisory Committee (IEAC)
  • Mixed‐team (internal and external) evaluators
  • 1 internal evaluator from each participating department/agency
  • 1‐2 program staff from each participating department/agency
MMMII:
• No committee
• Ad‐hoc targeted consultations for information requests and feedback/input
12. Issue #1 – Multiple stakeholders
Having multiple stakeholders poses significant challenges:
“Patchy” internal and external stakeholder identification for:
• Risk‐based planning and design of the evaluation
• Consultation during the evaluation assessment/planning stage
• Data collection during the evaluation project (interviews, surveys)
“Dispersed” key information and data needed for the evaluation
• Different departments/branches hold their own information
• Little data at the level of the PAA sub‐activities (PAA‐specific)
• Difficult to roll up data at the program/PAA level (e.g., financial data)
Variable levels of expectation/understanding of the evaluation process
Variable levels of engagement
• Less engagement when the evaluation scope covers entire branches/divisions/projects, including activities not tied to funding renewal
• Salience of the process is lower and input is mainly voluntary (potentially affecting the quality and timeliness of the evaluation)
13. Issue #2 – Evaluation program theory at the PAA level
The design of PAA‐level evaluations poses significant challenges:
Lack of a coherent program theory at the PAA level
• Program theory is the backbone of the evaluation
• PAA theory is mainly at the strategic level, as defined in the PMF
• Frequent absence of a PAA‐level RMAF (or RMAFs used differently by different units)
• Absence of a logic model and of the underlying program theory
Lack of a shared understanding and vision of the PAA logic
• Each unit has its own vision of its contribution to the PAA strategic objectives
• Lack of awareness and differing levels of understanding of the contributions of other units
• Potential tension when it comes to defining individual and collective contributions to the PAA strategic objectives
14. Strategy #1 – Well‐structured and engaged evaluation committee
A committee created at the planning stage has proven effective to:
Bring together two communities of practice (program and evaluation)
Facilitate the identification of internal and external stakeholders
Ensure that all stakeholders agree on the evaluation process
Manage expectations and consider individual needs with respect to:
• How findings will be presented in the evaluation report
• Who will be accountable for responding to recommendations
• Other concurrent activities (funding renewals, audits, etc.)
Determine the data requirements for the evaluation
Facilitate buy‐in on the development of evaluation method options
Increase the level of awareness in participating organizations, which facilitates consultations during the evaluation
15. Strategy #2 – Participatory logic model and program theory development and review
In close collaboration with the committee (and in consultation with individual groups/units):
Revisit the PAA logic at the strategic level and translate it into a program theory that works at the operational level
Translate the theory into a PAA logic model and validate it
Discuss up front how relevance and efficiency/economy will be evaluated
Design specific evaluation questions and validate them
Design a preliminary evaluation approach and indicators (data collection matrix) and validate them
Test methods and indicators against available data and other factors
Reconvene all committee members at the end to fully validate the selected approach (e.g., half‐day roundtable workshop)
Ensure that the outcomes of the consultation are captured and disseminated
17. Remaining challenges
PAA‐level heterogeneity
• Mixed bag of organizations, branches, divisions, programs, projects and initiatives with highly diverse:
  • Values and culture
  • Business processes/settings
  • Stakeholder groups
• Outlier components are always present
• Inconsistent rationale for the inclusion/exclusion of PAA components
  • Included on the basis of funding/special allocations and/or politics
  • Not always aligned with the PAA strategic objectives/PAA logic
PAA‐level instability
• Changes during the evaluation project and evaluation period (and beyond)
• Multiplies the challenges of assessing relevance and efficiency/economy
18. Remaining challenges
PAA‐level connectivity
• Artificial/conceptual connections across PAA components
• Underestimated connections with other PAA components and contributions from other PAAs
Other challenges
• PAA‐level recommendations
  • Acceptability (what is in it for me?)
  • Accountability (who is accountable?)
  • Actionability (how will they be put into action?)
• PAA‐level stakeholder fatigue
  • Key staff responsible for components under several PAAs are involved in multiple, concurrent evaluations