2. Evaluation
– In-depth, formative/summative assessment of an ongoing program, institution or function
– Systematic: addressing explicit criteria
– Informed by evidence: triangulation of multiple sources of evidence
– Independent: commissioned by and reporting to oversight; clearance for conflicts of interest
– Accountability and learning
3. Monitoring
– Management responsibility
– Regular observation of program implementation and results
– Collection and analysis of data for monitoring and impact assessment
– Management: learning, program adjustment, supervision, feedback
– Reporting: accountability
– RBM: the current main issue
4. Impact assessment
– Ex-post study of cause and effect (attribution) against development parameters
– From adoption/influence to longer-term outcomes/impacts
– Specialized research and methods (counterfactual)
– Economic (cost-benefit), social, environmental
– Accountability
– Learning (increasingly)
6. M&E bodies in the CGIAR: reporting lines and responsibilities
[Diagram: reporting lines and responsibilities of CGIAR M&E bodies]
– Centers: Center governance; project donors; project M&E
– CRPs: CRP oversight; Consortium Office annual reporting on CRPs
– Fund Council
– ISPC: ex-ante appraisal of CRPs
– SPIA: ex-post impact assessment
– IEA: evaluation of CRPs and other CGIAR institutions
7. Evaluation in a timeline
[Timeline diagram: inputs and progress from current research lead to outputs; adoption, uptake and influence follow at roughly 4-8 years; outcomes at 5-10 years; impact at 10-20 years. CRP evaluation spans inputs through outcomes; SPIA covers ex-post impact assessment.]
8. Accountability and learning
[Diagram: evaluation occurs in the present, informed by past research; it provides accountability for the past and advice for enhancing future effectiveness.
– Past: achievements, outputs; program M&E; adoption, outcomes, impacts
– Present: program implementation; quality; relevance to user needs; theories of change; ambition and risk; partnerships, gender; analysis, adjustment, lessons
– Future: outcome and impact goals]
Editor's Notes
IEA first started drafting the M-E-IA paper in two parts: IA with SPIA and M with the CO
However, primary responsibility for M and IA – the collection of data for both purposes and the conduct of IA studies – lies with the CRPs and Centers (the latter particularly regarding IA)
IA here refers to ex-post assessment
For learning, all functions are important
For accountability, there is currently more emphasis on indicators, including on impact, and the level at which this is pitched is not clear
Evaluation can serve accountability by providing information not only on progress and results but also on systems, processes and practices that indicate good research management and results potential
Evidence: evaluation of research programs draws on sources of evidence beyond those presented by the program or obtained during the evaluation.
These include the globally accumulated evidence on which current disciplinary research is based, and evidence on causality along impact pathways
Summative evaluation relies on M and IA information and focuses on results and consequences
Formative evaluation focuses on implementation and is oriented toward learning and improvement
Current issues:
Adequacy of data and information on performance (monitoring) and results (validated recording, IA)
Use of evaluation in decision-making
How is E perceived and implemented in Centers’ and CRPs’ M&E? Is it impact evaluation only?
What does evaluation need from monitoring?
Records, systematic and comprehensive operational datasets, progress data, verified narratives
Indicator information?
Bean counting
Observations from evaluations:
M systems (Centers, CRPs) are variable and lacking, and not necessarily compatible with the systematic generation of program data
Comprehensive data on program components at the project level do not exist in all cases; bilateral project information tends to remain the Centers’ business
Records: who works for a CRP (FTEs); publications
M&E has concentrated on development of indicators;
Milestone and indicator reporting is not necessarily useful or adequate for evaluation;
The basis on which annual milestones and indicators are set or calculated is not very clear, nor is it clear how these relate to the impact pathway and program (research) objectives.
Evaluations also assess:
How is monitoring used by management (project/program adjustment, validation/adjustment of theories of change)?
What does evaluation need from IA?
Summative evaluation draws on IA; evaluations do not have the resources, time or skills to do results assessment themselves
Verified narratives, impact assessments (validation)
Impacts observed under CRPs stem from past research
Observations:
Whose role is it? IA studies are available from Centers (often with some SPIA input)
IA is strongest in plant breeding
Historic accumulated impacts (plant breeding): it is difficult to distill impact cycles, varietal turnover and the addressing of new needs, but adoption studies have provided a sense of what is reaching users now
Claims without evidence; is this because there is a perception that programs need to promise results?
Annual outcome reporting is based on extrapolation (at best), sometimes done on an obscure basis
Evaluations need to be sensitive to challenges in impact assessment: non-linear pathways, scale, complexity
Assessment of potential and scalability: impact evaluation and testing (RCTs) give valuable information
Learning from impact assessment: proof of the TOC, analysis of adoption and use (or dis-adoption)
In the key criteria, M and IA can provide some of the evidence, not all
Interpretation of evidence, including M and IA information is an essential process in evaluation.
Evaluation depends on component evaluations (decentralized evaluations)
Not all evaluations need to cover all criteria; TORs have been very broad.
At the CRP level, teams have tended to interpret efficiency as an attribute of management, and sustainability as related to the TOC (rather than as demonstrated sustaining of benefits)
Until now, M&E systems have focused on M and results tracking; E seems to refer to impacts.
E needs to be defined and included in program plans (Phase II guidance seems to be quite explicit about that)
Flow of reporting and evaluative information in the CGIAR
How to make this into a coordinated and harmonized system for decision-making?
Components:
Annual monitoring and reporting,
Appraisal per funding cycle
Periodic (4-5 year) evaluation
Infrequent IA
Very long impact pathways are typical
Use and adjustment of the TOC become important
Positioning CGIAR CRP evaluation in the PRESENT
Current program design, implementation and systems reflect the past and indicate the plausibility of the future
Accountability vis-à-vis past investment and the allocation of current investments
Learning for the adjustment of current investments, reflecting on future opportunities and predictions: possible to assess, difficult to measure