David Fleming held a seminar on monitoring and evaluation in conflict-affected environments at the Post-war Reconstruction and Development Unit (PRDU), University of York.
1. Monitoring and Evaluation in Fragile and Conflict-Affected Contexts: The challenges of measurement
David Fleming, Senior Consultant, Itad
Date: 28th January 2015
2. Seminar Outline
1. Introducing Itad: life as an M&E consultant
2. Introducing/recapping M&E: why monitor and evaluate, and why is it important in FCAS?
3. Theories of change: what they are, why they are useful, and challenges in FCAS
4. M&E approaches and methods: how to monitor and evaluate in FCAS; examples from peacebuilding and humanitarian work
3. Learning objectives
1. Come away with a better understanding of why we do M&E and why it's particularly important in FCAS
2. Learn about and put into practice some of the most important M&E methods and tools for FCAS
3. Be able to better identify the challenges of doing M&E in FCAS and how to overcome these
4. Everyone to leave the room with a burning desire to get involved in M&E at some point in the future!
6. 2. Introducing M&E: Why Monitor and Evaluate?
"After decades in which development agencies have disbursed billions of dollars for social programs, and developing country governments and nongovernmental organizations (NGOs) have spent hundreds of billions more, it is deeply disappointing to recognize that we know relatively little about the net impact of most of these social programs"
'When will we ever learn?', Evaluation Gap Working Group, Center for Global Development, 2006
7. • Monitoring: "Collection of data with which managers can assess extent to which objectives are being achieved" (World Bank)
– Purpose: collect information on programme outputs and outcomes to track and improve performance and results
• Evaluation: "Determination of the value of a project, programme or policy" (World Bank)
– Purpose: evidence-based decisions, accountability, transparency, lesson learning
– Types: project, programme, policy, organisation, sector, theme, formative, summative, impact…
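A minimal sketch of what the monitoring definition looks like in practice, added here for illustration: tracking programme outputs against targets. The indicator names and figures below are invented, not from the seminar.

```python
# Minimal monitoring sketch: track hypothetical programme indicators
# against targets. All indicator names and figures are invented.

indicators = {
    # indicator: (achieved so far, target)
    "households reached": (3200, 5000),
    "community mediators trained": (45, 60),
    "grievance cases resolved": (120, 400),
}

for name, (achieved, target) in indicators.items():
    pct = 100 * achieved / target
    flag = "on track" if pct >= 75 else "needs attention"
    print(f"{name}: {achieved}/{target} ({pct:.0f}%) - {flag}")
```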
8. Why is M&E important in FCAS?
1. Development trends in FCAS
• By 2015, 50% of the world's poor will live in fragile states (OECD); by 2030 it might be two thirds (Brookings)
• Support to conflict, violence and fragility is becoming a key priority for most major donors
• ODA to fragile states is falling in quantity, but the number of actors is multiplying (OECD)
• DFID has been scaling up support to FCAS (commitment to increase to 30% of ODA by 2015)
• DFID strategies include BSOS, the cross-Whitehall CSSF, and the 'Beyond Aid' agenda
9. Why is M&E important in FCAS?
2. Increasing emphasis on transparency, accountability and fiduciary risk
• Higher risk to investments in terms of results, security and fiduciary risk
3. More limited evidence base – need for lesson learning and evidence of what works
• Support evidence-informed decisions and better programming by knowing what works and what doesn't, why, and in which contexts
10. What are the biggest challenges?
• Risk of exacerbating conflict
• Hawthorne effect
• Insecurity
• Political objectives
• Longer-term nature of results
• Measurement challenges
• Vulnerability to biases
• Lack of existing data
• Poor data reliability
• Poor data accessibility
• Unpredictable chains of causation
• Complex and dynamic contexts
11. M&E within the programme cycle
• Identification – problem analysis
• Appraisal – evidence of what works
• Design – most cost-effective intervention(s)
• Implementation – with M&E built in from the outset
• Completion – measure results: did it work?
• Post-completion – feed lessons into future decisions
Lesson learning and feedback connect the stages of the cycle.
12. Challenges of programming in FCAS
• Identification – problem analysis contested; disagreement
• Appraisal – little robust data and research; no time
• Design – little evidence to assess cost-effectiveness; political imperatives
• Implementation – great hurry; M&E lags behind; no baselines or measurement strategies
• Completion – not enough data to say; no inclination to admit failure
• Post-completion – not enough results published, stored or synthesised; no knowledge management/sharing and lots of uncoordinated actors
14. Why are ToCs useful for M&E?
A ToC is an iterative and collaborative process for thinking through how a programme is expected to work within the context of the broader system. It should create space for critical reflection and learning, and be adjusted and iterated over time.
• Links to the assumptions box in the logframe (LF), but goes beyond it by focusing on iterating, through learning, shared mental models of how change happens
• Important for developing an M&E strategy – test key links and assumptions (intellectual leaps) in the causal chain over the life of the programme
• Important for evaluability – provides the foundation for a theory-based evaluation
• Important to talk of 'theories' not 'theory' – i.e. to recognise and manage a range of theories and multiple drivers of change
• Not a tick-box exercise or management tool like the LF, but a way of working and thinking – primarily a process rather than a product
15. What are the pitfalls in FCAS?
• Time- and resource-consuming – so ToCs are often poorly conceived or too vague
• Poorly understood/used – as a linear tick-box exercise rather than an iterative approach
• Oversimplification of complex contextual (e.g. conflict) factors – reflexivity and feedback loops in complex conflict systems – the 'black swan' idea
• Absence of, or poor, conflict analysis – this must underpin project design
• Difficulties in evidence gathering/data collection – conflict environments are often data rich but information poor – insecurity, staff turnover
• Difficulties of working with and aiming to influence a range of actors
• Unpacking chains of cause and effect in FCAS can be very difficult
• Death by diagram
• Funnel of attrition
16. The funnel of attrition
[Funnel diagram; annotation: only the people remaining at the narrow end of the funnel may experience improved outcomes]
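To put rough numbers on the funnel idea, a hypothetical sketch (not from the slides): each stage retains only part of the previous one, so only a small share of those targeted may experience improved outcomes. All stage names and retention rates below are invented.

```python
# Hypothetical funnel of attrition: each stage keeps only a fraction of
# the previous stage. Stage names and retention rates are invented.

stages = [
    ("targeted by programme", 1.00),
    ("actually reached",      0.70),
    ("participate fully",     0.60),
    ("change behaviour",      0.50),
    ("improved outcomes",     0.40),
]

population = 10_000
for name, retention in stages:
    population = int(population * retention)
    print(f"{name:24s}: {population:5d}")
# Only the final group may experience improved outcomes.
```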
17. 4. M&E approaches and methods
Recent explosion of new and innovative approaches to monitoring and evaluation:
1. Use of mobile technology and ICTs for data collection and analysis – e.g. Ushahidi
2. Influence of complexity science – PDIA, DDD – enabling environment for experimentation
3. Remote monitoring and verification
4. Rigorous evaluation/impact evaluation designs
19. Why evaluate?
• White and Waddington (2012): 'The use of the systematic reviews methodology is comparatively new among social scientists in the international development field, but has grown rapidly in the last 3 years... To date, there has not been a strong tradition of using rigorous evidence in international development. The evidence bar has been rather low, with many policies based on anecdote and "cherry picking" of favourable cases.'
20. Why evaluate?
• Accountability and lesson learning
– Accountability to taxpayers and beneficiaries
– Understanding what works, why, where and for whom to underpin evidence-based programming
– Priority to evaluate interventions with a weak evidence base
• Inform scale-up of an intervention or transfer to another context
• Make mid-course corrections
• Support spending decisions
21. What is impact evaluation?
"Impact evaluation is a with versus without analysis: what happened with the programme (a factual record) compared to what would have happened in the absence of the programme (which requires a counterfactual)" (White, 2013)
"Impact evaluation aims to demonstrate that development programmes lead to development results, that the intervention has a cause and effect" (Stern et al., 2012)
• Attribution analysis to understand what difference a programme made
• Counterfactual construction through experimental/quasi-experimental methods for large n (comparison groups); causal chain analysis for small n
• Theory-based impact evaluation – in an ideal world, an RCT should be embedded in a broader theory-based design that addresses questions across the causal chain (White, 2013)
• Causal chain analysis – rigorous empirical assessment of causal mechanisms and the assumptions that underlie the causal chain
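A minimal sketch of the 'with versus without' logic, added for illustration: the comparison group stands in as the counterfactual, and the estimated impact is the difference in mean outcomes. All groups and outcome values below are hypothetical.

```python
# Minimal counterfactual sketch: the comparison group stands in for
# what would have happened without the programme. All data are invented.

# Hypothetical outcome scores (e.g. a household welfare index)
programme_group  = [62, 58, 71, 66, 60, 69, 64, 73]
comparison_group = [55, 52, 60, 58, 49, 57, 61, 54]

def mean(xs):
    return sum(xs) / len(xs)

factual        = mean(programme_group)   # what happened with the programme
counterfactual = mean(comparison_group)  # proxy for "without"
impact = factual - counterfactual        # naive estimate of average effect

print(f"with: {factual:.1f}  without (proxy): {counterfactual:.1f}  "
      f"estimated impact: {impact:+.1f}")
# Caveat: this only identifies impact if the comparison group is a valid
# counterfactual (e.g. via randomisation), which is rarely easy in FCAS.
```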
24. Pros and cons of RCTs
• Pros: RCTs are the "gold standard" for addressing attribution when an ex ante design is possible with a large number of units of assignment
• BUT MAJOR DRAWBACKS, ESPECIALLY IN FCAS:
– Not suited to complex development pathways with multiple non-linear causal factors
– Less appropriate where it is hard to identify comparison groups – a threat to validity
– When extrapolated from their context, RCT findings lose claims to rigour (Pritchett and Sandefur, 2013)
25. How best to evaluate in FCAS?
In increasing order of robustness:
• Use of an evaluation framework and a robust approach to evidence assessment – e.g. humanitarian evaluations
• Use of theories of change and contribution analysis to test causation and assumptions
• Realist evaluation design looking at how different mechanisms operate in contexts
26. Using an evaluation framework
[Diagram: Questions → Theory/Approach → Methods → Tools]
"Establishing a framework for the evaluation provides a consistent and systematic means to designing the evaluation, collating and analysing the existing evidence and the new data created, and generating and interpreting the results." (Magenta Book, para 6.1)
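One hypothetical way to make the Questions/Theory/Methods/Tools hierarchy concrete is to write the framework down as a structured object, so each question carries its approach, methods and tools. All entries below are invented examples, not from the Magenta Book or the slides.

```python
# Hypothetical evaluation framework: each evaluation question is tied to
# an approach, methods and tools, so the design stays consistent end to
# end. All entries are invented for illustration.

framework = [
    {
        "question": "Did the programme reduce local conflict incidents?",
        "approach": "theory-based / contribution analysis",
        "methods":  ["before-after comparison", "key informant interviews"],
        "tools":    ["incident database", "semi-structured interview guide"],
    },
    {
        "question": "Which assumptions in the theory of change held?",
        "approach": "causal chain analysis",
        "methods":  ["document review", "focus group discussions"],
        "tools":    ["assumption tracking matrix", "FGD topic guide"],
    },
]

for item in framework:
    print(item["question"])
    print("  approach:", item["approach"])
    print("  methods: ", ", ".join(item["methods"]))
    print("  tools:   ", ", ".join(item["tools"]))
```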
28. Evaluating peacebuilding
• Most useful definition of impact – understand the effects of an intervention on conflict drivers
• Conflict analysis is critical – understand/test the relevance of the intervention to conflict drivers
• Use of ToC to understand/test assumptions about how the intervention contributes to change
• Experimental approaches usually not useful – better to look at contribution
29. M&E Group Exercise
• Split into 4 groups
• 2 groups will be responsible for designing an outline M&E system for a peacebuilding programme
• 2 groups will be responsible for designing an outline proposal for an external evaluation of the same programme
30. Further Reading
Literature on M&E approaches and methods
• L. Morra Imas and R. Rist, The Road to Results (World Bank, 2009)
• S. Funnell and P. Rogers, Purposeful Program Theory (Wiley, 2011)
• E. Stern et al., 'Broadening the range of designs and methods for impact evaluation', DFID Working Paper 38, April 2012
• H. White and D. Phillips, 'Addressing attribution of cause and effect in small n impact evaluations', 3ie Working Paper 15, June 2012
• G. Westhorp, 'Realist impact evaluation: an introduction', September 2014
Literature on M&E with specific reference to FCAS
• DFID, 'Results in Fragile and Conflict-affected States and Situations', 2012
• DFID, 'Back to Basics: A compilation of best practices in design, monitoring and evaluation in fragile and conflict-affected environments', March 2013
• L. Schreter and A. Harmer, Delivering Aid in Highly Insecure Environments, 2013
• S. Herbert, 'Perceptions surveys in fragile and conflict-affected states', GSDRC Helpdesk Research Report, March 2013
• DFID, 'Evaluating impacts of peacebuilding interventions', May 2014
• J. Puri et al., 'What methods may be used in impact evaluations of humanitarian assistance?', 3ie Working Paper 22, December 2014
31. Thank you for listening - any questions?
david.fleming@itad.com