Learning from Evaluation

Bruce Britton
and Olivier Serrat
2013

Evaluation serves two main purposes: accountability and learning. Development agencies have tended to prioritize the first, and have given responsibility for it to centralized units. But evaluation for learning is where observers find the greatest need, today and tomorrow. A learning approach to evaluation therefore means designing, conducting, and following up evaluations with learning in mind.

  1. 1. Learning from Evaluation Bruce Britton and Olivier Serrat 2013 The views expressed in this presentation are the views of the authors and do not necessarily reflect the views or policies of the Asian Development Bank, or its Board of Governors, or the governments they represent. ADB does not guarantee the accuracy of the data included in this presentation and accepts no responsibility for any consequence of their use. The countries listed in this presentation do not imply any view on ADB's part as to sovereignty or independent status, and may not necessarily conform to ADB's terminology.
  2. 2. Define: Monitoring and Evaluation According to the Organisation for Economic Co-operation and Development • Monitoring is the systematic and continuous assessment of progress of a piece of work over time which checks that things are going according to plan and enables positive adjustments to be made. • Evaluation is the systematic and objective assessment of an ongoing or completed project, program, or policy, its design and implementation. • The aim of evaluation is to determine the relevance and fulfillment of objectives, effectiveness, efficiency, impact, and sustainability. • An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into decision-making processes.
  3. 3. The Planning, Monitoring, and Evaluation Triangle Planning Recommendations for future planning Plans show what needs to be monitored Plans show what to evaluate Evaluation highlights areas that need close monitoring Evaluation Monitoring provides data to be used in evaluation Monitoring revises plans during project implementation Monitoring
  4. 4. Main Types of Evaluation Summative or Formative Evaluation Self- or Independent Evaluation Internal or External Evaluation Program Evaluation (Geographic or Thematic) Project Evaluation Impact Evaluation Real-Time Evaluation A quality evaluation should provide credible and useful evidence to strengthen accountability for results or contribute to learning processes, or both.
  5. 5. The Results Chain Impact Outcome Inputs Activities Outputs
  6. 6. Outputs, Outcomes, Impacts Outputs—The products, capital goods, and services that result from a project; they may also include changes resulting from the project that are relevant to the achievement of its outcome. Outcomes—The likely or achieved short-term and medium-term effects of a project's outputs. Impacts—The positive and negative, primary and secondary, long-term effects produced by a project, directly or indirectly, intended or unintended.
  7. 7. OECD-DAC Evaluation Criteria Relevance—Examines the extent to which the objectives of a project matched the priorities or policies of major stakeholders (including beneficiaries) Effectiveness—Examines whether outputs led to the achievement of the planned outcome Efficiency—Assesses outputs in relation to inputs Impact—Assesses what changes (intended and unintended) have occurred as a result of the work Sustainability—Looks at how far changes are likely to continue in the longer term
  8. 8. The Results Chain and the OECD-DAC Evaluation Criteria Needs Impact Outcome Objective Inputs Activities Relevance Efficiency Effectiveness Sustainability Outputs
  9. 9. Challenges and Limits to Management (Degree of Control) Inputs, activities, and outputs are within the direct control of the project's management; they are what the project can be expected to achieve and be held accountable for. The outcome and impact are what the project is expected to contribute to. Moving up the results logic from inputs to impact, control decreases and the challenge of monitoring and evaluation increases.
  10. 10. Indicators An indicator is a quantitative or qualitative factor or variable that offers a means to measure accomplishment, reflects the changes connected with a project, and helps assess performance. Indicators do not provide proof so much as a reliable sign that the desired changes are happening (or have happened). It is important not to confuse indicators with outputs, outcomes, or impacts. Achieving the expected change in the indicators should not become the main purpose of a project.
  11. 11. Planning and the Use of Logic Models • In development assistance, most projects are planned using logic models such as the logical framework (logframe). • Logic models provide a systematic, structured approach to the design of projects. • Logic models involve determining the strategic elements (inputs, outputs, outcome, and impact) and their causal relationships, indicators, and the assumptions or risks that may influence success or failure. • Logic models can facilitate the planning, implementation, and evaluation of projects; however, they have significant limitations that can affect the design of evaluation systems.
  12. 12. The Limitations of Logic Models Usually assume simple, linear cause-effect development relationships Overlook or undervalue unintended or unplanned outcomes Do not make explicit the theory of change underlying the initiative Do not cope well with multi-factor, multi-stakeholder processes Undervalue creativity and experimentation in the pursuit of long-term, sustainable impact (the "lockframe" problem) Encourage fragmented rather than holistic thinking Require a high level of planning capacity
  13. 13. Purposes of Evaluation • To provide a basis for accountability, including the provision of information to the public • To improve the development effectiveness of future policies, strategies, and operations through feedback of lessons learned Accountability Learning
  14. 14. Does Evaluation Have to Be Either/Or? Evaluation for Accountability Evaluation for Accountability Evaluation for Learning Evaluation for Learning
  15. 15. What is Accountability? Accountability is the obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. This may require a careful, even legally defensible, demonstration that the work is consistent with the contract aims. Accountability is about demonstrating to donors, beneficiaries, and implementing partners that expenditure, actions, and results are as agreed or are as can reasonably be expected in a given situation.
  16. 16. Evaluation for Accountability Relates to standards, roles, and plans Is shaped by reporting requirements Focuses on effectiveness and efficiency Measures outputs and outcomes against original intentions Has a limited focus on the relevance and quality of the project Overlooks unintended outcomes (positive and negative) Concerns mostly single-loop learning
  17. 17. What is Learning? • Learning is the acquisition of knowledge or skills through instruction, study, and experience. • Learning is driven by organization, people, knowledge, and technology working in harmony—urging better and faster learning, and increasing the relevance of an organization. • Learning is an integral part of knowledge management and its ultimate end. The continuum runs from data and information (know-what, reductionist) through knowledge (know-how) to wisdom (know-why, systemic).
  18. 18. Evaluation for Learning Recognizes the difference an organization has made Understands how the organization has helped to make a difference Explores assumptions specific to each component of a project Shares the learning with a wide audience
  19. 19. The Experiential Learning Cycle
  20. 20. Evaluation for Accountability and Evaluation for Learning Basic aim: to find out about the past (accountability) vs. to improve future performance (learning). Emphasis: on the degree of success or failure (accountability) vs. on the reasons for success or failure (learning). Favored by: parliaments, treasuries, media, and pressure groups (accountability) vs. development agencies, developing countries, research institutions, and consultants (learning). Selection of topics: topics are selected based on random samples (accountability) vs. selected for their potential lessons (learning). Status of evaluation: evaluation is an end product (accountability) vs. part of the project cycle (learning).
  21. 21. Evaluation for Accountability and Evaluation for Learning (continued) Status of evaluators: evaluators should be impartial and independent (accountability) vs. evaluators usually include staff members of the aid agency (learning). Importance of data from evaluations: data are only one consideration (accountability) vs. data are highly valued for the planning and appraising of new development activities (learning). Importance of feedback: feedback is relatively unimportant (accountability) vs. feedback is vitally important (learning).
  22. 22. Both/And? Learning • Knowledge creation; generating generalizable lessons. Accountability • Reporting; ensuring compliance with plans, standards, or contracts. Together, they drive performance improvement and increased development effectiveness.
  23. 23. Programs Should Be Held Accountable For Asking difficult questions Maintaining a focus on outcome Identifying limitations, problems, and successes Taking risks rather than "playing it safe" Actively seeking evaluation and feedback Actively challenging assumptions Identifying shortcomings and how they might be rectified Effectively planning and managing based on monitoring data Acting on findings from evaluation Generating learning that can be used by others
  24. 24. What is Feedback? Evaluation feedback is a dynamic process that involves the presentation and dissemination of evaluation information in order to ensure its application to new or existing projects. Feedback, as distinct from dissemination of evaluation findings, is the process of ensuring that lessons learned are incorporated into new operations.
  25. 25. Actions to Improve the Use of Evaluation Feedback Understand how learning happens within and outside an organization Identify obstacles to learning and overcome them Assess how the relevance and timeliness of evaluation feedback can be improved Tailor feedback to the needs of different audiences Involve stakeholders in the design and implementation of evaluations and the use of feedback results
  26. 26. Who Can Learn from Evaluation? The wider community People who are or will be planning, managing, or executing similar projects in the future The people who contribute to the evaluation (including direct stakeholders) The people who conduct the evaluation The people who commission the evaluation The beneficiaries who are affected by the work being evaluated The people whose work is being evaluated (including implementing agencies)
  27. 27. Why We Need a Learning Approach to Evaluation Learning should be at the core of every organization to enable adaptability and resilience in the face of change. Evaluation provides unique opportunities to learn throughout the management cycle of a project. To seize these opportunities, evaluation must be designed, conducted, and followed up with learning in mind.
  28. 28. How Can Stakeholders Contribute to Learning from Evaluation? Help design the terms of reference for the evaluation Be involved in the evaluation process (as part of the evaluation team or reference group, or as a source of information) Discuss and respond to the analyses and findings Discuss and respond to recommendations Use findings to influence future practice or policy Review the evaluation process
  29. 29. What is a "Lesson"? Lessons learned are findings and conclusions that can be generalized beyond the evaluated project. In formulating lessons, the evaluators are expected to examine the project in a wider perspective and put it in relation to current ideas about good and bad practice.
  30. 30. What is Needed to Learn a "Lesson"? Identify: was there a difference between what was planned and what happened? Reflect: what actually happened? Analyze: why was there a difference and what were its root causes? Generalize: what can be learned from this and what could be done in the future to avoid the problem or repeat the success? Triangulate: what other sources confirm the lesson? At this point, we have a lesson identified but not yet learned: to truly learn a lesson one must take action.
  31. 31. What Influences Whether a Lesson is Learned? Political Factors Inspired Leadership The Quality of the Lesson Access to the Lesson Conventional Wisdom Chance Vested Interests Risk Aversion Bandwagons Pressure to Spend Bureaucratic Inertia
  32. 32. Quality Standards for Evaluation Use and Learning The evaluation is designed, conducted, and reported to meet the needs of its intended users. Conclusions, recommendations, and lessons are clear, relevant, targeted, and actionable so that the evaluation can be used to achieve its intended accountability and learning objectives. The evaluation is delivered in time to ensure optimal use of results. Systematic storage, dissemination, and management of the evaluation report are ensured to provide easy access to all partners, reach target audiences, and maximize the benefits of the evaluation.
  33. 33. Monitoring and Evaluation Systems as Institutionalized Learning Learning must be incorporated into the overall management cycle of a project through an effective feedback system. Learning is also a key tool for management and, as such, the strategy for the application of evaluative knowledge is an important means of advancing towards outcomes. Information must be disseminated and available to potential users in order to become applied knowledge.
  34. 34. A Learning Approach to Evaluation In development assistance, the overarching goal for evaluation is to foster a transparent, inquisitive, and self-critical organization culture across the whole international development community so we can all learn to do better.
  35. 35. Eight Challenges Facing Learning-Oriented Evaluations The inflexibility of logic models The demands for accountability and impact The constraints created by rigid reporting frameworks The constraints of quantitative indicators Involving stakeholders Learning considered as a knowledge commodity Underinvestment in evaluation Underinvestment in the architecture of knowledge management and learning
  36. 36. Focus of the Terms of Reference for an Evaluation Evaluation Purpose Project Background Stakeholder Involvement Evaluation Questions Findings, Conclusions, and Recommendations Methodology Work Plan and Schedule Deliverables
  37. 37. Building Learning into the Terms of Reference for an Evaluation Make the drafting of the terms of reference a participatory activity—involve stakeholders if you can Consider the utilization of the evaluation from the outset—who else might benefit from it? Spend time getting the evaluation questions clear and include questions about unintended outcomes Ensure that the "deliverables" include learning points aimed at a wide audience Build in diverse reporting and dissemination methods for a range of audiences Ensure there is follow-up by assigning responsibilities for implementing recommendations Build in a review of the evaluation process
  38. 38. Why Questions Are the Heart of Evaluation for Learning Learning is best stimulated by seeking answers to questions Questions cut through bureaucracy and provide a meaningful focus for evaluation Seeking answers to questions can motivate and energize Questions make it easier to design the evaluation: what data to gather, how, and from whom? Answers to questions can provide a structure for findings, conclusions, and recommendations
  39. 39. Criteria for Useful Evaluation Questions Data can be used to answer each question There is more than one possible answer to each question: each question is open and its wording does not pre-determine the answer The primary intended users want to answer the questions: they care about the answers The primary users want to answer the questions for themselves, not just for someone else The intended users have ideas about how they would use the answers to the questions: they can specify the relevance of the answers for future action
  40. 40. Utilization-Focused Evaluation Utilization-focused evaluation is done for and with specific intended primary users for specific intended uses. It begins with the premise that evaluations should be judged by their utility and actual use. It concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is intended use by intended users.
  41. 41. The Stages of Utilization-Focused Evaluation 1. Identify primary intended users 2. Gain commitment and focus the evaluation 3. Decide on evaluation methods 4. Analyze and interpret findings, reach conclusions, and make recommendations 5. Disseminate evaluation findings
  42. 42. Potential Evaluation Audiences Program Staff Media Program Managers Policy Makers Board Members Program Funders Program Clients NGOs Researchers Other Agencies
  43. 43. Target Audiences for Evaluation Feedback
  44. 44. Typology of Evaluation Use
  45. 45. Conceptual Use of Evaluation Genuine Learning • Conceptual use is about generating knowledge in and understanding of a given area. Then, people think about the project in a different way. • Over time and given changes in the contextual and political circumstances surrounding the project, conceptual use can lead to significant changes.
  46. 46. Instrumental Use of Evaluation Practical Application • The evaluation directly affects decision-making and influences changes in the program under review. • Evidence for this type of utilization involves decisions and actions that arise from the evaluation, including the implementation of recommendations.
  47. 47. Process Use of Evaluation Learning by Doing • Process use concerns how individuals and organizations are impacted as a result of participating in an evaluation. Being involved in an evaluation may lead to changes in the thoughts and behaviors of individuals which may then lead to beneficial cultural and organizational change. • Types of use that precede lessons learned include learning to learn, creating shared understanding, developing networks, strengthening projects, and boosting morale.
  48. 48. Symbolic Use of Evaluation Purposeful Non-Learning • Symbolic use means that evaluations are undertaken to signify the purported rationality of the agency in question. Hence one can claim that good management practices are in place.
  49. 49. Political Use of Evaluation Learning is Irrelevant • Evaluation occurs after key decisions have been taken. The evaluation is then used to justify the pre-existing position, e.g., budget cuts to a program.
  50. 50. Factors That Affect Utilization Relevance of the findings, conclusions, and recommendations Credibility of the evaluators Quality of the analysis Actual findings The evaluator's communication practices Timeliness of reporting The attitudes of key individuals towards the evaluation The organizational climate, e.g., decision-making, political, and financial The organizational setting
  51. 51. Obstacles to Learning from Evaluation Individual Defense Mechanisms • Immediate personal reaction to feedback that threatens us tends to be defensive. In addition, we tend to resist evidence that does not accord with our own world views. • It takes a conscious effort to actively seek feedback and hear evidence that may be negative, or may not fit with our own world view. It is hard to treat discordant information as something that may help us to improve, and to navigate that information with curiosity rather than suspicion.
  52. 52. Obstacles to Learning from Evaluation Organizational Dilemmas • Do organizational culture, structures, policies, or procedures help or hinder learning? Do design processes build ownership and accountability from the outset? Do reporting and review procedures foster honesty and trust? Do procedures allow for flexibility and change? Do staff collectively share their experiences and insights about what works? Are there incentives for learning? Are there time and resource constraints?
  53. 53. Enhancing Learning from Evaluation For Individual Evaluations • Select topics that are relevant to your intended audiences and their timeframes. • Proactively plan for use from the start: this means intended use by intended users. Identify supportive and influential individuals who want evaluative information. • Vigorously engage intended users throughout the evaluation process, e.g., by means of advisory committees, help with forming recommendations, data analysis. Learning is an active, not passive, process.
  54. 54. Enhancing Learning from Evaluation For Individual Evaluations • The evaluation needs to be credible in the eyes of users and of high quality. If your findings are controversial your evidence needs to be of an even higher standard. • Timely reporting supports immediate use although research suggests evaluations have a useful life of 8–10 years. • Reporting of results needs to make use of multiple formats, e.g., written, verbal, and/or visual, while presenting 3–5 key messages in a user-friendly format.
  55. 55. Enhancing Learning from Evaluation For Individual Evaluations • Good recommendations are technically, politically, administratively, legally, and financially viable. • Evaluation findings must be assertively disseminated in a manner that supports audience engagement. Your evaluation findings are competing against other sources of information. • Think about learning and use from a change management perspective. Resistance to change is to be expected. Potential users benefit from support: technical; emotional; financial; practical help and advice; etc.
  56. 56. Enhancing Learning from Evaluation Institutionalizing Monitoring and Evaluation Systems • To enhance the prospects of learning and use, there is a need to link evaluation into mainstream processes such as policy making, planning, budgeting, accountability and reporting, managing for results, and organizational incentives
  57. 57. Monitoring and Evaluation: Conventional and Narrative Conventional • Deductive—about expected outcomes • Indicators often determined by senior staff • Closed or specific questioning • Analysis by management • Based on numbers—no context • About "proving" • Central tendencies Narrative • Inductive—about unexpected outcomes • Diversity of views (from staff and beneficiaries) • Open questioning • Participatory analysis • Contextual—"rich description" • About learning and improvement • Outer edges of experience
  58. 58. What is Required of Today's Evaluations Involve Stakeholders Build Evaluation Capacity Design Evaluations to Enhance Use Show Cultural Competence Focus on Performance Improvement Demonstrate Transparency
  59. 59. Why Use a Narrative (Story-Based) Approach? Stories can convey difficult messages People remember stories Stories provide a "rich picture" to decision makers People tell stories naturally Stories start with the lived experience of beneficiaries Stories provide a basis for discussion
  60. 60. What is Most Significant Change? The Most Significant Change technique is a qualitative and participatory form of monitoring and evaluation based on the collection and systematic selection of stories of reported changes from projects. The Most Significant Change technique does not use predesigned indicators but allows these to emerge from the stories told by those affected by the project.
  61. 61. The Most Significant Change Cycle Project Changes in People's Lives Stories Learning Action Improved Project
  62. 62. The Core of the Most Significant Change Technique A question: "In your opinion, what was the most significant change that took place in … over the … months?" [Describe the change and explain why you think it is significant.] Re-iteration of the same kind of question: "Decide which of the change stories collected describes the 'most significant' changes experienced by the respondents." [Describe the change and tell why you think it is significant.]
  63. 63. What Makes Most Significant Change Different The advantages of the Most Significant Change technique, compared to conventional approaches to monitoring and evaluation, are that those participating have a choice about what sort of information to collect; the technique uses diverse rather than standard data; information is analyzed by all participants, not simply by a central unit; and subjectivity is used rather than avoided.
  64. 64. The 10 Steps of the Most Significant Change Technique 1. Get started, establish champions, and familiarize 2. Determine domains of change 3. Define the reporting period 4. Collect stories 5. Review and filter stories regularly 6. Discuss and communicate the results of the selection with stakeholders 7. Verify stories 8. Quantify 9. Conduct secondary analysis and meta-monitoring 10. Revise the process
  65. 65. Selecting Significant Change Stories Staff read through and identify the most significant of all the submitted significant change stories. Selection criteria emerge through discussion of stories— these criteria are noted. Staff document (i) what significant change was selected, (ii) why it was selected, (iii) the process used to make the selection, and (iv) who was involved. Subjectivity is made accountable through transparency.
  66. 66. How to Use the Most Significant Change Technique Not as a stand-alone method To identify unexpected changes Alongside indicator-based systems To involve a wide range of people To engage people in analysis of change To focus on outcomes rather than outputs
  67. 67. The Conventional Problem-Focused Approach to Evaluation Identify the issues or problems Determine the root causes Generate solutions Develop action plans Implement action plans
  68. 68. Problem-Focused Approach— Assumptions • There is some ideal way for things to be (usually determined by the logic model). • If a situation is not as we would like it to be, it is a "problem" to be solved. • Deviations from the plan (logic model) are automatically seen as problems. • The way to solve a problem is to break it into parts and analyze it. • If we find the broken part and fix it, the whole problem will be solved.
  69. 69. Unintended Consequences of Problem-Focused Approaches • Fragmented responses—lack of holistic overview • Necessary adaptations to plans viewed negatively • Focus on single-loop learning—lack of creativity and innovation; untested assumptions • Reinforces negative vocabulary—drains energy; leads to hopelessness and wish to simply get work completed • Reinforces "blame culture"—undermines trust; increases risk aversion; strains relationships
  70. 70. Appreciative Inquiry Ap-pre'ci-ate, v. • Recognize the quality, significance or magnitude of • Recognizing the best in someone or something • To be fully aware of or sensitive to • To raise in value or price In-quire', v. • The act of exploration and discovery • The process of gathering information for the purpose of learning and changing • A close examination in a quest for truth
  71. 71. What is Appreciative Inquiry? • Appreciative inquiry builds on learning from what is working well rather than focusing on "fixing" problems. • Appreciative inquiry brings positive experiences and successes to everyone's awareness. • Appreciative inquiry uses a process of collaborative inquiry that collects and celebrates good news stories. • Stories that emanate from appreciative inquiry generate knowledge that strengthens the identity, spirit, and vision of the team involved in the project and helps everyone learn how to better guide its development.
  72. 72. Appreciative Inquiry and Evaluation • Appreciative inquiry helps identify and value what is working well in a project and builds on these good practices. • Appreciative inquiry is better suited to formative evaluation or monitoring than to summative evaluation. • Appreciative inquiry can be used to guide questions during development of the terms of reference for an evaluation and at data collection stages.
  73. 73. Comparing Appreciative Inquiry with Problem-Focused Approaches Problem focus: asks what to fix; its underlying grammar is problem, symptoms, causes, solutions, action plan, intervention; implies that there is an ideal; breaks things into pieces and specialties, guaranteeing fragmented responses; assumes organizations are constellations of problems to be overcome; and is slow, because it takes a lot of positive emotion to make real change. Appreciative inquiry: asks what to grow; offers a new grammar of the true, good, better, possible; expands the vision of a preferred future; assumes organizations are sources of capacity and imagination; and creates new energy fast.
  74. 74. The Appreciative Inquiry Process —The 5-Ds or 5-Is 1. Definition: Frame the inquiry (Initiate) 2. Discovery: What is good? What has worked? (Inquire) 3. Dream: What might be? (Imagine) 4. Design: What should be? (Innovate) 5. Destiny: How to make what should be happen? (Implement)
  75. 75. Example Starter Questions for Appreciative Inquiry • Think back on your time with this project. Describe a high point or exceptional experience that demonstrates what the project has been able to achieve. • Describe a time when this project has been at its best—when people were proud to be a part of it. What happened? What made it possible for this high point to occur? What would things look like if that example of excellence were the norm? Good appreciative inquiry questions should illuminate in turn the five dimensions the technique addresses.
  76. 76. Appreciative Inquiry Can Enrich Evaluation When … The organization is interested in using participatory and collaborative evaluation approaches The evaluation is happening part way through a project (formative) The evaluation includes a wide range of stakeholders with differing views of "success" The organization is genuinely interested in learning from unintended as well as intended outcomes The organization wants to use evaluation findings to guide change There is a desire to build evaluation capacity
  77. 77. The Nature of Development Complex: involves a mix of actors and factors Indeterminate: independent of the duration of a project Non-Linear: unexpected, emergent, discontinuous Two-Way: results may change the project Beyond Control: but subject to influence Incremental, Cumulative: watersheds and tipping points
  78. 78. Challenges in Evaluating Development Interventions Establishing cause and effect in open systems Measuring what did not happen Reporting on emerging objectives Justifying continuing "successful" projects Timing—when to evaluate Encouraging iterative learning among partners Clarifying values Working in "insecure" situations
  79. 79. A Critical Look at Logic Models On the plus side, logic models clarify objectives and how they will be achieved, make explicit the assumptions about cause and effect, identify potential risks, and establish how progress will be monitored. On the minus side, they suffer from lack of flexibility, lack of attention to relationships, a problem-focused approach to planning, and insufficient attention to outcomes; they oversimplify monitoring and evaluation; and they are inappropriate at program and organizational levels.
  80. 80. Outcome Mapping Outcome mapping is an approach to planning, monitoring, and evaluating social change initiatives. Outcome mapping uses a set of tools and guidelines that steer teams through a process to identify their project's desired changes and to work collaboratively to bring about those changes. Results are measured by changes in the relationships, behaviors, and actions of the individuals, groups, and organizations the project is working directly with and seeking to influence.
  81. 81. Outcome Mapping Can Help … Understand and influence human well-being Plan and measure social change Foster social and organizational learning Identify parties with whom one might work directly to influence behavioral change Bring stakeholders into the monitoring and evaluation process Strengthen partnerships and alliances Plan and monitor behavioral change Monitor the internal practices of the project Design an evaluation plan
  82. 82. The Three Key Concepts of Outcome Mapping Sphere of Influence Boundary Partners Outcomes Understood as Changes in Behavior Development is about people—it is about how they relate to one another and their environment, and how they learn in doing so. Outcome mapping puts people and learning first and accepts unexpected change as a source of innovation. It shifts the focus from changes in state, viz. reduced poverty, to changes in behaviors, relationships, actions, and activities. —Olivier Serrat
  83. 83. There is a Limit to Our Influence Project Sphere of Control Partners Sphere of Influence Beneficiaries Sphere of Interest
  84. 84. There is a Limit to Our Influence Inputs Sphere of Control Outputs Outcomes Sphere of Influence Impact Sphere of Interest
  85. 85. Focus of Outcome Mapping Inputs Activities Outputs Outcome Outcome Mapping Impact
  86. 86. Boundary Partners Project Beneficiaries Stakeholders Boundary Partners
  87. 87. Boundary Partner Example Farmers participate in field trials Participatory research on demonstration plots to reduce use of chemicals and introduce yam cultivation Participating farmers learn how to identify and use non-cultivated food sources and how to grow yams Extension workers visit demonstration farms Increased knowledge of traditional practices through food festivals and other social marketing Other farmers use noncultivated food sources and grow yams Training of extension workers Documentation of effective farming practices Less dependency on market for food sources Greater food security Extension workers promote use of noncultivated food sources, natural fertilizers, and adoption of yam cultivation Improved health and reduced poverty
  88. 88. The Problem with Impact Impact implies cause and effect; positive, intended results; a focus on ultimate effects; credit that goes to a single contributor; and a story that ends when the project obtains success. The reality is an open system in which unexpected positive and negative results occur, upstream effects are important, multiple actors create results and deserve credit, and the change process never ends.
  89. 89. The Principles of Outcome Mapping Continuous Learning and Flexibility Actor-Centered Development and Behavioral Change Participation and Accountability Non-Linearity and Contribution (not attribution and control)
  90. 90. Three Stages of Outcome Mapping Intentional Design • Step 1: Vision • Step 2: Mission • Step 3: Boundary Partners • Step 4: Outcome Challenges • Step 5: Progress Markers • Step 6: Strategy Maps • Step 7: Organizational Practices Outcome and Performance Monitoring • Step 8: Monitoring Priorities • Step 9: Outcome Journals • Step 10: Strategy Journal • Step 11: Performance Journal Evaluation Planning • Step 12: Evaluation Plan
  91. 91. When Does Outcome Mapping Work Best? When working in partnership When building capacity When a deeper understanding of social factors is critical When promoting knowledge and influencing policy When tackling complex problems To embed reflection and dialogue
  92. 92. Tips for Introducing Outcome Mapping Focus on timing Use it flexibly Use it to manage shifts in organizational culture Foster capacities and mindsets Use it to encourage collaboration
  93. 93. Learning and Project Failure Stage Preparation Category Failures of intelligence: not knowing enough at the early stages of project formulation, resulting in crucial aspects of the project’s context being ignored. Failures of decision making: drawing false conclusions or making wrong choices from the data that are available, and underestimating the importance of key pieces of information. Implementation Failures of implementation: bad or inadequate management of one or more important aspects of the project. Failures of reaction: inability or unwillingness to modify the project in response to new information or changes in conditions that come to light as the project proceeds. Evaluation Failures of evaluation: not paying enough attention to the results. Failures of learning: not transferring the lessons into future plans and procedures.
  94. 94. Competencies for Knowledge Management and Learning Strategy Development A strategy is a long-term plan of action to achieve a particular goal. Management Techniques Knowledge is a resource. It needs to be managed effectively, just like other resources such as financial and human resources. Collaboration Mechanisms When working with others, efforts sometimes turn out to be less than the sum of the parts. Too often, not enough attention is paid to facilitating effective collaborative practices. Knowledge Sharing and Learning Two-way communication that takes place simply and effectively builds knowledge. Knowledge Capture and Storage Organizational memory is a key part of any knowledge management system. The knowledge needs to be retrievable to be useful.
  95. 95. Knowledge Solutions for Knowledge Management and Learning Strategy Development • Behavior and change; emergence and scenario thinking; institutional capacity and participation; knowledge assets; marketing; organizational learning; partnerships and networks of practice Management Techniques • Branding and value; complexity and lateral thinking; linear thinking; organizational change; talent management Collaboration Mechanisms • Collaborative tools; communities of practice and learning alliances; leadership; social innovations; teamwork Knowledge Sharing and Learning • Creativity, innovation, and learning; learning and development; learning lessons; dissemination Knowledge Capture and Storage • Knowledge harvesting; reporting; technology platforms www.adb.org/site/knowledge-management/knowledge-solutions
  96. 96. Developing Evaluation Capacity Capacity is the ability of people, organizations, and society to manage their affairs successfully. Capacity to undertake effective monitoring and evaluation is a determining factor of development effectiveness. Evaluation capacity development is the process of reinforcing or establishing the skills, resources, structures, and commitment to conduct and use monitoring and evaluation over time.
  97. 97. Why Develop Evaluation Capacity? Stronger evaluation capacity will help development agencies • Develop as learning organizations. • Take ownership of their visions for poverty reduction, if the evaluation vision is aligned with them. • Profit more effectively from formal evaluations. • Make self-evaluations an important part of their activities. • Focus on quality improvement efforts. • Increase the benefits and decrease the costs associated with their operations. • Augment their ability to change programming midstream and adapt in a dynamic, unpredictable environment. • Build evaluation equity, as they become better able to conduct more of their own self-evaluations instead of contracting them out. • Shorten the learning cycle.
  98. 98. Using Knowledge Management for Evaluation Evaluation findings only add value when they are used, so: • Make evaluation findings available when needed by decision makers, in a user-friendly format, e.g., a searchable lessons database system. • Invest in knowledge management architecture. • Make evaluation findings available in a range of knowledge products, including web-based. • Encourage collaboration between evaluation specialists and knowledge management specialists. • Improve targeted dissemination of evaluation findings.
  99. 99. How to Share Findings from Evaluations To increase the chances evaluation findings will be used they must be shared widely, so: • Upload to public websites. • Hold meetings with interested stakeholders. • Incorporate findings into existing publications, e.g., annual reports, newsletters. • Present findings and learning points at annual meetings. • Publish an article for a journal. • Present a paper at a conference or seminar. • Invite local researchers and academics to discuss evaluation findings. • Share findings and learning points through workshops and training. • Share lessons through knowledge networks and communities of practice.
  100. 100. Characteristics of a Good Knowledge Product A good knowledge product is • Related to what users want • Designed for a specific audience • Relevant to decision-making needs • Timely • Written in clear and easily understandable language • Based on evaluation information without bias • If possible, developed through a participatory process and validated through a quality assurance process with relevant stakeholders • Easily accessible to the target audience • Consistent with other knowledge products, to enhance visibility and learning
  101. 101. Further Reading • ADB. 2008. Output Accomplishment and the Design and Monitoring Framework. Manila. Available: www.adb.org/publications/output-accomplishment-anddesign-and-monitoring-framework • ADB. 2008. Focusing on Project Metrics. Manila. Available: www.adb.org/publications/focusing-project-metrics • ADB. 2008. Outcome Mapping. Manila. Available: www.adb.org/publications/outcome-mapping • ADB. 2008. The Reframing Matrix. Manila. Available: www.adb.org/publications/reframing-matrix • ADB. 2008. Appreciative Inquiry. Manila. Available: www.adb.org/publications/appreciative-inquiry
  102. 102. Further Reading • ADB. 2009. The Most Significant Change Technique. Manila. Available: www.adb.org/publications/most-significantchange-technique • ADB. 2009. Monthly Progress Notes. Manila. Available: www.adb.org/publications/monthly-progress-notes • ADB. 2009. Learning from Evaluation. Manila. Available: www.adb.org/publications/learning-evaluation • ADB. 2009. Asking Effective Questions. Manila. Available: www.adb.org/publications/learning-evaluation • ADB. 2010. Embracing Failure. Manila. Available: www.adb.org/publications/embracing-failure
  103. 103. Further Reading • ADB. 2010. Harvesting Knowledge. Manila. Available: www.adb.org/publications/harvesting-knowledge • ADB. 2010. The Perils of Performance Measurement. Manila. Available: www.adb.org/publications/perils-performancemeasurement • ADB. 2011. Learning Histories. Manila. Available: www.adb.org/publications/learning-histories • Anne Acosta and Boru Douthwaite. 2005. Appreciative Inquiry: An Approach for Learning and Change Based on Our Own Best Practices. ILAC Brief 6. • Ollie Bakewell. 2003. Sharpening the Development Process. INTRAC.
  105. 105. Further Reading • Scott Bayley. 2008. Maximizing the Use of Evaluation Findings. Manila. • Rick Davies and Jess Dart. 2004. The Most Significant Change (MSC) Technique: A Guide to Its Use. Monitoring and Evaluation News. • Paul Engel and Charlotte Carlsson. 2002. Enhancing Learning through Evaluation. ECDPM. • Lucy Earle. 2003. Lost in the Matrix: The Logframe and the Local Picture. INTRAC. • Paul Engel, Charlotte Carlsson, and Arin Van Zee. 2003. Making Evaluation Results Count: Internalizing Evidence by Learning. ECDPM.
  106. 106. Further Reading • Ollie Bakewell and Anne Garbutt. 2005. The Use and Abuse of the Logical Framework Approach. Sida. • Stephen Gill. 2009. Developing a Learning Culture in Nonprofit Organizations. Sage • Harry Jones and Simon Hearn. 2009. Outcome Mapping: A Realistic Alternative for Planning, Monitoring, and Evaluation. Overseas Development Institute. • OECD. 2001. Evaluation Feedback for Effective Learning and Accountability. • OECD. 2010. DAC Quality Standards for Development Evaluation.
  107. 107. Further Reading • OECD. Undated. Evaluating Development Co-Operation: Summary of Key Norms and Standards. • Michael Quinn Patton. 2008. Utilization-Focused Evaluation. Sage. • Michael Quinn Patton and Douglas Horton. 2009. Utilization-Focused Evaluation for Agricultural Innovation. CGIAR-ILAC. • Burt Perrin. 2007. Towards a New View of Accountability. In Marie-Louise Bemelmans-Videc, Jeremy Lonsdale, and Burt Perrin (eds.). 2007. Making Accountability Work: Dilemmas for Evaluation and for Audit. Transaction Publishers. • Hallie Preskill and Rosalie Torres. 1999. Evaluative Inquiry for Learning in Organizations. Sage.
  108. 108. Further Reading • Sida. 2004. Looking Back, Moving Forward. Sida Evaluation Manual. Sida. • UNDP. 2009. Handbook on Planning Monitoring and Evaluation for Development Results. • Rob Vincent and Ailish Byrne. 2006. Enhancing Learning in Development Partnerships. Development in Practice. Vol. 16, No. 5, pp. 385-399. • Eric Vogt, Juanita Brown, and David Isaacs. 2003. The Art of Powerful Questions: Catalyzing Insight, Innovation, and Action. Whole Systems Associates.
  109. 109. Further Reading • Jim Woodhill. 2005. M&E as Learning: Re-Thinking the Dominant Paradigm. In Monitoring and Evaluation of Soil Conservation and Watershed Development Projects. World Association of Soil And Water Conservation. • World Bank. 2004. Ten Steps to a Results-Based Monitoring and Evaluation System.
  110. 110. Videos • Jess Dart. Most Significant Change, Part 1. Available: www.youtube.com/watch?v=H32FTygl-Zs&feature=related • Jess Dart. Most Significant Change, Part 2. Available: www.youtube.com/watch?v=b-wpBoVPkc0&feature=related • Jess Dart. Most Significant Change, Part 3. Available: www.youtube.com/watch?v=PazXICHBDDc&feature=related • Jess Dart. Most Significant Change Part 4. Available: www.youtube.com/watch?v=8DmMXiJr1iw&feature=related • Jess Dart. Most Significant Change Part 5 (Q&A). Available: www.youtube.com/watch?v=JuaGmstG8Kc&feature=related • Sarah Earl. Introduction to Outcome Mapping, Part 1. Available: www.youtube.com/watch?v=fPL_KEUawnc
  111. 111. Videos • Sarah Earl. Introduction to Outcome Mapping, Part 2. Available: www.youtube.com/watch?v=a9jmD-mC2lQ&NR=1 • Sarah Earl. Introduction to Outcome Mapping, Part 3. Available: www.youtube.com/watch?v=ulXcE455pj4&feature=related • Sarah Earl. Utilization-Focused Evaluation. Available: www.youtube.com/watch?v=KY4krwHTWPU&feature=related
  112. 112. Bruce Britton Organizational Learning Specialist Framework bruce@framework.org.uk www.framework.org.uk Olivier Serrat Principal Knowledge Management Specialist Regional and Sustainable Development Department Asian Development Bank knowledge@adb.org www.adb.org/knowledge-management www.facebook.com/adbknowledgesolutions www.scribd.com/knowledge_solutions www.twitter.com/adbknowledge