This document discusses the importance of monitoring and evaluation (M&E) for programs and projects. It defines monitoring as an ongoing process of collecting and analyzing data to track progress and make adjustments, while evaluation assesses relevance, effectiveness, impact and sustainability. The key aspects of building an M&E system are agreeing on outcomes to measure, selecting indicators, gathering baseline data, setting targets, monitoring implementation and results, reporting findings, and sustaining the system long-term. A strong M&E system provides evidence of achievements and challenges, enables learning and improvement, and helps ensure resources are allocated to effective programs.
This presentation explains the difference between Monitoring and Evaluation; the types of M&E frameworks; steps in logical framework and its difference from theory of change.
2. Lecture Overview
Monitoring and Evaluation
How to Build an M&E System
3. What is M, what is E, why and how to monitor
MONITORING & EVALUATION
4. What is Monitoring
An ongoing process that generates information to inform decisions about the program while it is being implemented.
The routine collection and analysis of information to track progress against set plans and check compliance with established standards.
Helps identify trends and patterns, adapt strategies, and inform decisions.
Key words:
• Continuous – ongoing, frequent in nature
• Collecting and analyzing information – to measure progress towards goals
• Comparing results – assessing the performance of a program/project
5. Why is Monitoring Important?
• Evidence of how much has or has NOT been achieved
  – Quantitative: numbers, percentages
  – Qualitative: narrative or observation
• Examination of trends
• Highlighting problems
• Early warning signs
• Corrective actions
• Evaluating the effectiveness of management action
• Determining the achievement of results
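The routine comparison described above — collected indicator values checked against set plans, with shortfalls flagged as early-warning signs — can be sketched in a few lines of Python. All indicator names and figures here are invented for illustration.

```python
# Minimal monitoring sketch: compare collected indicator values against
# planned targets and flag shortfalls as early-warning signs.
# All indicator names and numbers are invented for illustration.

def progress_report(planned, actual, warn_below=0.8):
    """Return per-indicator progress ratios and a list of early warnings."""
    report, warnings = {}, []
    for indicator, target in planned.items():
        achieved = actual.get(indicator, 0)
        ratio = achieved / target if target else 0.0
        report[indicator] = ratio
        if ratio < warn_below:  # corrective action may be needed
            warnings.append(indicator)
    return report, warnings

planned = {"volunteers_trained": 100, "children_enrolled": 500}
actual = {"volunteers_trained": 90, "children_enrolled": 300}

report, warnings = progress_report(planned, actual)
print(report)    # {'volunteers_trained': 0.9, 'children_enrolled': 0.6}
print(warnings)  # ['children_enrolled']
```

The threshold (`warn_below`) is a management choice: set it wherever a shortfall should trigger a closer look rather than wait for the next reporting cycle.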
6. What is Evaluation
Evaluation is an assessment of an intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intention is to provide information that is credible and useful, enabling incorporation of lessons learned into decision-making processes.
Key Words:
• Assessment – of the value of an event or action
• Relevance
• Efficiency
• Effectiveness
• Impact
• Sustainability
• Lessons learned
8. What is M and what is E?
Monitoring
• Measures progress towards goals, but doesn't tell us the extent to which results were achieved, or the impact
• Continuous, frequent
• Has to take place during the intervention
Evaluation
• Measures whether progress towards the goal is caused by the intervention – causality
• Infrequent, time-bound
• Can assess an ongoing or a completed intervention
10. Components of Program Evaluation
• Needs assessment – What are the characteristics of the target population? What are the risks and opportunities? What programs are most suitable?
• Program theory assessment – What is the logical chain connecting our program to the desired results?
• Monitoring and process evaluation – Is the program being rolled out as planned? Is there high uptake among clients? What do they think of it?
• Impact evaluation – What was the impact and the magnitude of the program?
• Cost effectiveness – Given the magnitude of impact and cost, how efficient is the program?
Are your questions connected to decision-making?
12. Who is this Evaluation For?
Academics
Donors
• Their Constituents
Politicians / policymakers
Technocrats
Implementers
Proponents, Skeptics
Beneficiaries
13. How can Impact Evaluation Help Us?
Answers the following questions:
• What works best, why, and when?
• How can we scale up what works?
There is surprisingly little hard evidence on what works.
With better evidence, we can do more with a given budget.
If people knew money was going to programmes that worked, it could help increase the pot for anti-poverty programmes.
14. Programs and their Evaluations: Where do we Start?
Intervention
• Start with a problem
• Verify that the problem actually exists
• Generate a theory of why the problem exists
• Design the program
• Think about whether the solution is cost-effective
Program Evaluation
• Start with a question
• Verify the question hasn't been answered
• State a hypothesis
• Design the evaluation
• Determine whether the value of the answer is worth the cost of the evaluation
15. Life Cycle of a Program (illustrated with a volunteer-run reading program)
• Theory of Change / Needs Assessment
• Designing the program to implement – e.g. distributing reading materials and training volunteers
• Background preparation, logistics, roll-out of the program
• Baseline Evaluation
• Monitoring implementation – reading materials delivered; volunteers trained; target children are reached; classes are run and volunteers show up; attendance in classes; entire district is covered
• Process evaluation – progress towards target
• Endline Evaluation
• Reporting findings – impact and process evaluation findings
• Using the findings to improve the program model and delivery
• Planning for continuous improvement – refresher training of teachers; tracking the target children and convincing parents to send their child; incentives for volunteers to run classes daily and efficiently (motivation); efforts made for children to attend regularly; improved coverage
• Change or improvement
16. Program Theory – a Snapshot
Inputs → Activities → Outputs → Outcomes → Impacts
Implementation covers inputs, activities, and outputs; Results covers outcomes and impacts.
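The snapshot above can be written down as a small data structure: a results chain whose lower levels are tracked by implementation monitoring and whose upper levels by results monitoring. The reading-program entries below are illustrative assumptions, not taken from the deck.

```python
# A results chain (program theory) as an ordered structure, split into the
# levels covered by implementation monitoring vs results monitoring.
# The reading-program entries are illustrative assumptions.
from collections import OrderedDict

results_chain = OrderedDict([
    ("inputs",     ["funding", "reading materials", "volunteers"]),
    ("activities", ["train volunteers", "run reading classes"]),
    ("outputs",    ["volunteers trained", "classes held", "children attending"]),
    ("outcomes",   ["improved reading scores"]),
    ("impacts",    ["higher literacy in the district"]),
])

IMPLEMENTATION = {"inputs", "activities", "outputs"}  # implementation monitoring
RESULTS = {"outcomes", "impacts"}                     # results monitoring

for level, items in results_chain.items():
    scope = "implementation" if level in IMPLEMENTATION else "results"
    print(f"{level} ({scope}): {', '.join(items)}")
```

Keeping the chain explicit like this makes it easy to check that every monitoring indicator maps to exactly one level of the program theory.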
17. With a focus on measuring both implementation and results:
HOW TO BUILD AN M&E SYSTEM
18. Methods of Monitoring
• First-hand information
• Citizens' reporting
• Surveys
• Formal reports by project/programme staff: project status report; project schedule chart; project financial status report
• Informal reports
• Graphic presentations
19. Monitoring: Questions
Is the intervention implemented as designed? Does the program perform?
(Results chain: Inputs → Outputs → Outcomes, checked against implementation plans and targets)
• Are intervention money, staff and other inputs available and put to use as planned? Are inputs used effectively?
• Are the services being delivered as planned?
• Is the intervention reaching the right population and target numbers?
• Is the target population satisfied with the services? Are they utilizing the services?
• What is the intensity of the treatment?
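Two of the questions above — is the intervention reaching its target numbers, and is the target population actually using the services — reduce to simple rates. The figures below are invented for illustration.

```python
# Two routine monitoring rates: reach (share of the target population the
# service has enrolled) and utilization (share of the enrolled who actively
# use it). All figures are invented for illustration.

def reach_rate(enrolled, target_population):
    return enrolled / target_population

def utilization_rate(active_users, enrolled):
    return active_users / enrolled

target_population = 1000  # eligible people in the catchment area
enrolled = 600            # people the service has reached
active_users = 450        # people actually using the service

print(f"reach: {reach_rate(enrolled, target_population):.0%}")         # reach: 60%
print(f"utilization: {utilization_rate(active_users, enrolled):.0%}")  # utilization: 75%
```

High reach with low utilization and low reach with high utilization point to different problems (service quality vs outreach), which is why the two rates are worth tracking separately.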
20. Implementing Monitoring
Develop a monitoring plan:
• How should implementation be carried out? What is going to be changed?
• Are the staff's incentives aligned with the project? Can they be incentivized to follow the implementation protocol?
• How will you train staff? How will they interact with beneficiaries or other stakeholders?
• What supplies or tools can you give your staff to make following the implementation design easier?
• What can you do to monitor? (Field visits, tracking forms, administrative data, etc.)
• What intensity of monitoring (frequency, resources required, …)?
21. Ten Steps to a Results-based Monitoring and Evaluation System
1. Conducting a readiness and needs assessment
2. Agreeing on outcomes to monitor and evaluate
3. Selecting key indicators to monitor outcomes
4. Gathering baseline data on indicators
5. Planning for improvement: selecting realistic targets
6. Monitoring for results
7. Using evaluation information
8. Reporting findings
9. Using findings
10. Sustaining the M&E system within the organization
22. Conducting a needs and readiness assessment (Step 1)
• What are the current systems that exist?
• What is the need for the monitoring and evaluation?
• Who will benefit from this system?
• At what levels will the data be used?
• Do we have the organizational willingness and capacity to establish the M&E system?
• Who has the skills to design and build the M&E system? Who will manage it?
• What are the barriers to implementing the M&E system on the ground (resource crunch)?
• How will you overcome these barriers?
• Will there be pilot programs that can be evaluated within the M&E system?
– DO WE GO AHEAD?
23. Agreeing on outcomes to monitor and evaluate (Step 2)
• What are we trying to achieve? What is the vision that our M&E system will help us achieve?
• Are there national or sectoral goals (e.g. a commitment to achieving the MDGs)?
• Is there political or donor-driven interest in the goals?
In other words, what are our outcomes – improving coverage, learning outcomes, … – broader than focusing on merely inputs and activities.
24. Selecting key indicators to monitor outcomes (Step 3)
• Identify WHAT needs to be measured so that we know we have achieved our results
• Avoid broad-based results; assess based on feasibility, time, cost, relevance
• Indicator development is a core activity in building an M&E system and drives all subsequent data collection, analysis, and reporting
• Arriving at indicators will take some time
• Identify plans for data collection, analysis, reporting
• PILOT! PILOT! PILOT!
25. Gathering baseline data on indicators (Step 4)
• Where are we today?
• What is the performance of the indicators today?
• Sources of baseline information: primary or secondary data
• Data types: qualitative or quantitative
• Data collection instruments
26. Planning for improvement: selecting realistic targets (Step 5)
• Targets – quantifiable levels of the indicators
• Sequential, feasible and measurable targets
• If we reach our sequential set of targets, then we will reach our outcomes!
• Time-bound – universal enrolment by 2015 (outcome: better economic opportunities), every child immunized by 2013 (outcome: reduction in infant mortality), etc.
• Available funding and resources to be taken into account
Target 1 → Target 2 → Target 3 → Outcomes
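The sequential-targets idea can be sketched as a small check: given the current indicator value, which targets in the sequence have been met? The enrolment thresholds below are illustrative assumptions.

```python
# Check a current indicator value against a sequence of time-bound targets.
# The enrolment thresholds are illustrative assumptions.

def targets_met(current_value, targets):
    """targets: list of (label, threshold) pairs, in sequence.
    Returns the labels of all targets the current value has reached."""
    return [label for label, threshold in targets if current_value >= threshold]

# e.g. enrolment rate (%) on the way to universal enrolment
targets = [
    ("Target 1: 50% enrolled", 50),
    ("Target 2: 75% enrolled", 75),
    ("Target 3: 100% enrolled (outcome)", 100),
]

print(targets_met(80, targets))
# ['Target 1: 50% enrolled', 'Target 2: 75% enrolled']
```

Because the targets are sequential, a gap in the returned list (a later target met while an earlier one is not) would signal an inconsistency in how the targets were set.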
27. Monitoring for implementation and results (Step 6)
Results chain: Inputs → Activities → Outputs → Outcomes → Impacts
• Implementation monitoring covers inputs, activities and outputs – e.g. provision of materials; training of volunteers; usage of material; number of volunteers teaching
• Results monitoring covers outcomes and impacts – e.g. change in the percentage of children who cannot read; change in teacher attendance
28. Evaluation: Using Evaluation Information (Step 7)
Monitoring does not provide information on attribution and causality. Information from evaluation can be useful to:
• Help determine whether the right things are being done
• Help select among competing strategies by comparing results – are there better ways of doing things?
• Help build consensus on scale-up
• Investigate why something did not work – scope for in-depth analysis
• Evaluate the costs relative to the benefits and help allocate limited resources
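Since monitoring alone cannot establish attribution, here is a minimal sketch of what an evaluation adds: comparing the outcome in a treatment group against a comparison group. This difference is a valid causal estimate only if the groups are comparable (e.g. assigned at random); the scores below are invented for illustration.

```python
# Minimal impact-evaluation sketch: difference in mean outcomes between a
# treatment group and a comparison group. Valid as a causal estimate only
# when the groups are comparable (e.g. randomly assigned).
# All scores are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# endline reading scores (arbitrary scale)
treatment_scores = [62, 58, 70, 65, 60]   # children in the program
comparison_scores = [55, 50, 57, 52, 56]  # similar children outside it

estimated_effect = mean(treatment_scores) - mean(comparison_scores)
print(estimated_effect)  # 9.0
```

A real evaluation would also report uncertainty (standard errors, confidence intervals) before concluding anything from the point estimate.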
29. Reporting findings, Using results, Sustaining the M&E system (Steps 8–10)
Reporting findings: what findings are reported to whom, in what format, and at what intervals. A good M&E system should provide an early warning system to detect problems or inconsistencies, as well as being a vehicle for demonstrating the value of an intervention – so do not hide poor results.
Using results: recognize both internal and external uses of your results.
Sustaining the M&E system: some ways of doing this are generating demand, assigning responsibilities, increasing capacity, and gathering trustworthy data.
Evaluation is an assessment of a program's:
RELEVANCE: suitability – will the program actually be useful in the country context and for the people? (e.g. the water is already pure, but I hand out chlorine tablets)
EFFICIENCY: how economically are resources being used? (e.g. the cost exceeds the benefit)
EFFECTIVENESS: how much effect did it have – were the objectives achieved or not?
IMPACT: long-term effects, intended or unintended
SUSTAINABILITY: continuity – will the program's benefits continue?
First let’s narrow down our definition of Evaluation
Evaluation is a very big term and could mean many things…
In general, we’ll be talking about program evaluation
So that means, not the type of evaluation that’s more administrative in nature…
Performance evaluations, audits, etc…
Unless those are part of a new policy or program that we wish to evaluate…
Programs are still a general term
Could include Policies, or more generally, “interventions”
What distinguishes impact evaluation?
What makes “randomized evaluation” distinct?
Where does monitoring fit in?
The quiz you took at the beginning… that's part of an evaluation. You'll take one at the end as well. And you'll also give us some course feedback – something we'll look at after this whole course is done and use to make design and implementation changes to the course.
But it’s not something we consider part of the course design itself. It’s not really meant to serve as a pedagogical device.
The clickers. That’s more part of monitoring. It’s specifically designed as part of the pedagogy.
It gives us instant feedback based on which we make mid-course adjustments, corrections, etc.
Part of the pedagogy is that we have a specific decision tree that the implementers (in this case, lecturers) use based on the results of the survey.
(Hindi glosses: Progress – "vikaas"; Extent – "had")
Building an evaluation system allows for:
• a more in-depth study of results-based outcomes and impacts
• bringing in other data sources than just extant indicators
• addressing factors that are too difficult or expensive to continuously monitor
• tackling the issue of why and how the trends being tracked with monitoring data are moving in the directions they are (perhaps most important).
Who is your audience? And what questions are they asking?
Academics: we have quite a few academics in the audience.
Beneficiaries: This may be slightly different from “who are your stakeholders”?
This affects the type of evaluation you do, but also the questions you seek to answer.
This question is larger than a question of aid
Aid accounts for less than 10% of development spending.
Governments have their own budgets, their own programmes.
Before thinking about evaluations, we should think about what it is that we’re evaluating… Here I’ll generically call it, an “intervention”
You’d be surprised how many policies are implemented that address non-existent problems. One of our evaluations in India is of a policy called continuous and comprehensive evaluation…
Now let’s ask a very specific question…
Ignore blue boxes
If this is our program theory – our assumption, our hypothesis – then it has two parts. The first is where the program is being implemented and the second is where the results have started to show.
How do we build an M&E system that measures both the implementation of the program and its results?
How the World Bank says we can do it – a handbook for development practitioners
Give examples
Outcome – "parinaam" (Hindi for result)
MDG example – eight international development goals that all UN member states aimed to achieve by the year 2015, e.g.:
• Eradicate extreme poverty and hunger
• Achieve universal primary education
(Hindi glosses: Indicator – "suchak", "nideshak"; Relevant – "uchit"; Instrument – "jariya"; Measure – "naap"; Feasible – "sambhav")
Another example – I want higher literacy levels
1) 50% of children in school; 2) 100% of children in school; 3) teacher attendance increases; 4) children's scores improve
Implementation monitoring – data collected on inputs, activities and immediate outputs; information on administrative, implementation and management issues.
Results monitoring – sets indicators for outcomes and collects data on them; systematic reporting on progress towards outcomes.
Evaluation – establishing causality; can be expensive, but very useful in certain cases.