This document discusses monitoring and evaluation (M&E) capacity in Tanzania. It notes that while M&E is important for improving development outcomes, many countries, including Tanzania, lack the necessary M&E capacity at both the individual and institutional levels. Comprehensive training is needed to address gaps in M&E skills. The document outlines the differences between monitoring, which tracks project progress, and evaluation, which assesses outcomes and impacts in more depth. Both are important management tools that provide useful feedback when integrated.
1. PREPARED AND PRESENTED BY
DR ZABRON KENGERA
GEOGRAPHY DEPARTMENT
UNIVERSITY OF DAR ES SALAAM
2. The status of M&E globally, and in Tanzania in particular
The Paris Declaration (2005) emphasizes the need for managing for change/results.
The need for results-oriented reporting and assessments.
However, many countries, especially in sub-Saharan Africa, lack the necessary capacity to monitor development progress and to use the findings to improve the performance of sectoral interventions.
3. A World Bank study (2006) noted institutional and individual gaps in M&E capacity.
Many countries have no demand for M&E.
Those that have it have not been able to develop comprehensive and systematic M&E.
4. Still, the focus has been more on individuals, with little integration at the organizational level.
Thus, comprehensive training is needed to fill the gap in M&E capacity.
5. Limited teaching of project management as an academic and professional discipline.
In Tanzania, PPM appears to be a new discipline and profession with no advocate.
No ownership of the field; it is treated more as a tool than a profession.
Difficulties in mainstreaming its teaching.
As a result, we lack both PPM and M&E experts.
Some organizations and individuals regard M&E as an unnecessary and expensive exercise.
6. Some believe it is something they can simply avoid.
While some equate monitoring with supervision, others see project evaluation as little more than a post-mortem of the project.
Other managers are not even aware of the relevance of, and need for, M&E in their organizations.
7. Poor project implementation
Misuse, and sometimes duplication, of resources
Lack of accountability
Implementation for its own sake
Poor project impact
Irrelevant projects with poor acceptability
Lack of trust and support from development stakeholders, including donors and project beneficiaries
8. • Monitoring is the regular and ongoing collection and analysis of information on the progress of the project and the difference the project is making.
OR
• Monitoring is the routine collection and analysis of information to track progress against set plans and check compliance with established standards.
• It helps the project team stay focused and energized, and ensures they are on track in achieving their objectives.
9. It reminds project officials whether they are carrying out their chosen strategies and actions effectively.
It identifies areas that need to be adapted or changed.
It helps identify trends and patterns, adapt strategies, and inform decisions for project/programme management.
10. Questions (present continuous tense)
Are the inputs (finance, materials and personnel) available in the right amount and at the right time?
Are activities leading to the expected outputs?
Are they implemented as proposed?
Are there factors stalling progress?
Are outputs leading to the outcomes?
11. How do beneficiaries feel about the work?
What is causing delays or unexpected results?
Are we doing the right thing for the right beneficiaries (approach and methodology)?
Is anything happening that should lead management to modify the operation's implementation plan?
Are activities being implemented on schedule and within budget?
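The schedule-and-budget question above reduces to a simple variance check. A minimal sketch follows; the budget categories and figures are hypothetical, not from the slides.

```python
# Budget-variance check for the monitoring question
# "Are activities being implemented on schedule and within budget?"
# All category names and figures are illustrative.

def variance_pct(planned: float, actual: float) -> float:
    """Spending variance as a percentage of the planned amount
    (positive = overspent, negative = underspent)."""
    if planned <= 0:
        raise ValueError("planned must be positive")
    return 100.0 * (actual - planned) / planned

budget = {
    "materials": (10_000, 11_500),  # (planned, actual)
    "personnel": (8_000, 7_600),
}

for item, (planned, actual) in budget.items():
    v = variance_pct(planned, actual)
    flag = "over budget" if v > 0 else "within budget"
    print(f"{item}: {v:+.1f}% ({flag})")
```

In practice a monitoring report would flag only variances beyond an agreed tolerance (say ±10%) for management attention.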
12. Results monitoring
Tracks effects and impacts.
This is where monitoring merges with evaluation, to determine whether the project/programme is on target towards its intended results (outputs, outcomes, impact) and whether there may be any unintended impact (positive or negative).
13. Process (activity) monitoring
Tracks the use of inputs and resources, the progress of activities and the delivery of outputs.
• It examines how activities are delivered: the efficiency in time and resources.
• It is often conducted in conjunction with compliance monitoring and feeds into the evaluation of impact.
14. For example, a water and sanitation project may monitor that targeted households receive septic systems according to schedule.
Compliance monitoring ensures compliance with donor regulations and expected results, grant and contract requirements, local government regulations and laws, and ethical standards.
For example, a shelter project may monitor that shelters adhere to agreed national and international safety standards.
15. Context (situation) monitoring
Tracks the setting in which the project/programme operates, especially as it affects identified risks and assumptions, but also any unexpected considerations that may arise.
• It includes the field as well as the larger political, institutional, funding and policy context that affects the project/programme.
16. For example, a project in a conflict-prone area may monitor potential fighting that could not only affect project success but also endanger project staff and volunteers.
Beneficiary monitoring tracks beneficiary perceptions of a project/programme. It includes beneficiary satisfaction or complaints with the project/programme, including their participation, treatment, access to resources and their overall experience of change.
17. Financial monitoring accounts for costs by input and activity within predefined categories of expenditure. It is often conducted in conjunction with compliance and process monitoring.
Organizational monitoring tracks sustainability, institutional development and capacity building in the project/programme and with its partners.
18. It is often done in conjunction with the monitoring processes of the larger implementing organization.
19. Systematic collection and analysis
Good design: guided by a clear purpose, methodology, type of information and choice of indicators.
Monitoring needs to be timely, so information can be readily used to inform project/programme implementation.
Focus on the results: do not monitor for its own sake; look at whether the project is yielding the intended results.
20. Regular visits
Aim to inform decisions and improve project performance (relevant and useful)
Assess relevance and performance
Participation of key stakeholders, to increase ownership and usefulness of the information
21. 1. Progress reports: obtain and analyze project documents for information on progress.
2. Workplans: the extent to which they have been implemented, and how they reflect the project objectives and activities.
3. Participation (stakeholder meetings): getting feedback from partners and beneficiaries on project progress.
4. Field visits: validation, triangulation and first-hand information on the conditions and trends of change following the introduced intervention; supplement the day-to-day monitoring done by administrators.
5. Indicators and data collection instruments, i.e. questionnaires and checklists.
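The indicator tracking in item 5 can be sketched as a minimal record that links a baseline, a target, and the value collected by an instrument. The field names and sample values below are assumptions for illustration, not from the slides.

```python
# Minimal sketch of an indicator record for a monitoring checklist.
# Field names and sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float   # value before the intervention
    target: float     # planned end-of-project value
    actual: float     # latest value collected
    source: str       # e.g. questionnaire, checklist, field visit

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        if span == 0:
            raise ValueError("target equals baseline")
        return (self.actual - self.baseline) / span

wells = Indicator("functioning wells", baseline=20, target=60,
                  actual=50, source="field visit")
print(f"{wells.name}: {wells.progress():.0%} of target achieved")
```

Recording the data source alongside each value supports the triangulation mentioned under field visits.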
22. An internal activity, normally done by project staff for internal use
An essential part of good day-to-day management practice
Concerned with verifying that project activities are being undertaken, services are being delivered, and the project is leading to the desired behaviour changes described in the project proposal
Focuses more on inputs, activities and outputs
23. Externally oriented, normally done by external consultants and experts
An essential activity in a longer-term, dynamic learning process
Focuses more on in-depth information: outcomes and impacts
Normally challenges the design
Periodic, and uses more of the past tense
Learning and sharing of information with other stakeholders
Improves and informs future projects/plans (post-mortem)
24. Relies on more detailed data (from surveys or studies), in addition to that collected through the monitoring system, to understand the project in greater depth.
Assesses higher-level outcomes and impact, and may verify some of the findings from the monitoring.
It explores both anticipated and unanticipated results.
25. Evaluation reports/studies may form a foundation, a starting point (baseline information), for monitoring changes immediately after project implementation.
Through the results of periodic evaluations, monitoring tools and strategies can be refined and improved further.
26. M&E are two different management tools that are closely related, interactive and mutually supportive.
Overlaps: process-based evaluation is equated with monitoring.
Through routine tracking of project progress, monitoring can provide quantitative and qualitative data useful for designing and implementing project evaluation exercises.
27. Evaluation is an assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy: its design, implementation and results.
While monitoring aims at tracking changes in programme results over time, evaluation seeks to understand specifically why these changes occur.
The aim is to determine the impact, relevance, efficiency, effectiveness, perceptions of the beneficiaries, participation and sustainability of the project activities and outcomes.
28. It therefore provides information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of the key stakeholders.
29. Evaluations involve identifying and reflecting upon the effects of what has been done, and judging their worth.
Their findings allow project/programme managers, beneficiaries, partners, donors and other project/programme stakeholders to learn from the experience and improve future interventions.
30. Evaluation is closely related to monitoring, but includes taking a more in-depth look at the outcomes or impact of a piece of work and the extent to which the stated objectives have been achieved at a particular point in time (e.g. at the mid-point of a project and after its completion).
31. Evaluation helps the project team to assess:
Whether they are working on the right issue;
Whether their selected goals and objectives, strategies and underlying assumptions are sound;
How efficient the project work has been, and whether they have used the project resources wisely (Chapman, 2006).
32. Learning from project failures and successes can provide valuable insights into what works and what does not.
M&E should also include analysing how changes in the external context might have influenced the project.
33. Like any research activity, evaluations should be guided by the following principles/ethics:
Objectivity: evaluation should reveal and convey technically adequate information about the features that determine the worth or merit of the programme being evaluated.
Systematic enquiry, analysis and reporting.
Usefulness: no evaluation is good unless its results are used.
Cost-effectiveness: efficient use of resources; looks at the availability of relevant and focused information.
Respectfulness: respect the culture of the respondents.
Consideration of the respondents: develop interest in their responses and views. Don't
34. Garbage in, garbage out: explain the purpose and the procedures you have followed (methodology).
Ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved as well as those affected by its results.
Collective responsibility.
Protect your respondents (anonymity).
Ensure that an evaluation will be realistic and practical.
35. Evaluation responds to the concerns of one interest group more than another.
Bias in selecting the respondents.
Respondents are influenced by politics or personal differences (lacks objectivity).
The evaluator conducts an evaluation when he/she lacks sufficient skills or experience.
36. Impact
What changes did the project bring about?
Were there any unplanned or unintended changes?
Effectiveness
Were the operation's objectives achieved?
Did the outputs lead to the intended outcomes?
37. Effectiveness: goal attainment. The extent to which the project has been able to attain its intended objectives (percentage of population reached, number of, say, terraces constructed).
There is no universally accepted standard for the percentage, but there are views that it should be at least 70%. The remainder can be explained by possible externalities (political, physical and other factors beyond the control of the project management).
We normally don't expect such a perfect project model. There are, however, several cases where projects have over-performed.
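The goal-attainment arithmetic above can be sketched as follows. The 70% benchmark is the rule of thumb cited in the slide, not a universal standard, and the indicator names and figures are hypothetical.

```python
# Goal-attainment check: actual achievement as a percentage of target.
# The 70% benchmark is the slide's rule of thumb, not a universal standard.
BENCHMARK = 70.0

def attainment(target: float, actual: float) -> float:
    """Return goal attainment as a percentage of the target."""
    if target <= 0:
        raise ValueError("target must be positive")
    return 100.0 * actual / target

# Hypothetical indicators from a soil-conservation project.
indicators = {
    "terraces constructed": (500, 420),   # (target, actual)
    "households reached": (1200, 700),
}

for name, (target, actual) in indicators.items():
    pct = attainment(target, actual)
    verdict = "meets" if pct >= BENCHMARK else "below"
    print(f"{name}: {pct:.0f}% of target ({verdict} the {BENCHMARK:.0f}% benchmark)")
```

A shortfall below the benchmark would then be examined against the externalities the slide mentions before any judgment on effectiveness is made.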
38. Efficiency
Were stocks of items available on time and in the right quantities and quality?
Were activities implemented on schedule and within budget?
Were outputs delivered economically?
Feasibility and practicability:
Environmental/physical factors, socio-cultural and economic factors, technology, resources required and operational costs
39. Relevance and legitimacy
Were the operation's objectives consistent with beneficiaries' needs and with organizational policies?
Participation: tells a lot about the extent of benefits, ownership, relevance and acceptability
The level and extent of community participation
The stage of participation and the nature of the people involved
Sustainability:
Are the benefits likely to be maintained for an extended period after assistance ends?
By local people, local organizations, capacity building, participation, relevance, practicability, alternative funding
40. Policy issues: context, policy implications and environment
Leadership and organizations: institutional and local capacity, qualified personnel
Financial and economic: enough money, the economic status of the beneficiaries
Technological factors: at the time of project implementation and during evaluations
Socio-cultural factors: did cultural factors inform the project formulation and implementation?
41. 1. Timing of the evaluation (stage in the PCM)
Ex-ante evaluation: before the project. It includes assessments of the project proposal as part of project appraisal, through EIA, CBA, SEA and CV.
Formative evaluations (process/activity): occur during project/programme implementation to improve performance and assess compliance (observe schedule, target group, budget and deviations).
Midterm evaluations are formative in purpose and occur midway through implementation.
Summative evaluations: occur at the end of project/programme implementation to assess effectiveness and impact.
42. Final evaluations are summative in purpose and are conducted (often externally) at the completion of project/programme implementation, to assess how well the project/programme achieved its intended objectives.
Ex-post evaluations are conducted some time after implementation, to assess long-term impact and sustainability.
43. • Can we include the baseline survey in this category?
• Baseline information sets the foundation for both monitoring and evaluation.
• It informs the evaluation team of the situation and conditions of the population and geographical areas prior to the intervention being evaluated.
• We should therefore distinguish the baseline from situational analysis and socio-economic surveys.
44. Internal or self-evaluations are conducted by those responsible for implementing a project/programme.
External or independent evaluations are conducted by evaluator(s) outside the implementing team, lending a degree of objectivity and often technical expertise.
These tend to focus on accountability.
45. What is your opinion? Would you recommend internal or external evaluation, and why?
What is the basis of your argument?
o Objectivity and bias
o Cost-effectiveness
o External vs internal influence and control
o Relevance (to whom?)
46. Participatory evaluations are conducted with the beneficiaries and other key stakeholders, and can be empowering, building their capacity, ownership and support.
Joint evaluations are conducted collaboratively by more than one implementing partner, to build consensus at different levels, credibility and joint support.
47. Meta-evaluations are used to assess the evaluation process itself.
Thematic evaluations focus on one theme, such as gender or environment, typically across a number of projects, programmes or the whole organization.
Cluster/sector evaluations focus on a set of related activities, projects or programmes, typically across sites and implemented by multiple organizations (e.g.
48. Impact evaluations focus on the effect of a project/programme, rather than on its management and delivery.
49. 1. Strategic level:
Looks at the long-term objectives of the project, the external environment and resource allocation
2. Tactical (management) level:
Efficiency and resource use
3. Operational level:
Implementation, time schedule and budget
50. Track the progress of the project.
Identify problems and mistakes early so
they can be remedied or avoided.
Improve project performance.
Identify opportunities for future project
implementation.
Help decision-making and generate
insights and learning on which strategies
and approaches are more or less
effective in different contexts and
circumstances.
51. Clarify performance, cost and time
relationships.
Track and take account of the changing
context, especially the attitudes of
stakeholders.
Respond promptly and adequately to
project threats.
Provide information to stakeholders.
52. Ensure effective and efficient use of
resources.
Ensure accountability to various
stakeholders.
Keep focused on the broader picture and
long-term goals to ensure you are
working towards these, rather than just
focusing on activities
Generate useful evidence and data to
help support and strengthen your project
53. Learning and knowledge sharing
Provide feedback to stakeholders
Demonstrate to internal and external
stakeholders the project's impacts on
socio-economic wellbeing
Promote and celebrate our work
54. Of course, there are many commonalities
between evaluation and research,
especially their reliance on systematic
investigation.
Fundamental differences:
Research is generally conducted to
produce knowledge that is generalizable
across different programs.
Research seeks to prove; evaluation seeks
to improve (Michael Quinn Patton, 2002).
55. Evaluations are conducted to generate
findings that are intended for use by the
specific programs under which evaluations
are conducted.
In research, the questions for investigation
are researcher derived.
Questions for evaluation, on the other
hand, are derived from the program itself or
its stakeholders
In research, the role of the researcher is more
clearly defined.
Research is conducted in a more controlled
environment as compared with evaluation
which may need to analyze the context in
which the project is being implemented
56. Evaluators, on the other hand, may often
have role conflicts, as they may be part
of the program in which the evaluation is
being conducted.
Research results often benefit a smaller
audience than evaluation results do.
Research is usually published; evaluations
often are not.
57. Although research and evaluation are
very similar and both use systematic
methods, their intent and purposes
differ.
As Patton says, “Research seeks to prove,
evaluation seeks to improve…”
Evaluation involves looking closely at the
operations of a program or program
initiative and, with the understanding
gained from this examination, making
recommendations for improving the
program.
The intention of evaluation is to help
improve programs, not to assign blame,
criticize, or eliminate something.
58. Can we compare and contrast the following?
How are they related, and what are their
differences? Each plays a different role
in managing a project:
Project Monitoring and Evaluation
Research
Project supervision
Project auditing
Project tracking
Quality control
59. Quality and availability of M&E experts
Perceptions from employers (seen as wasteful
and time-consuming)
The demand for M&E is still low
M&E training is still in its infancy
M&E seen as a surplus activity (something we
can do without)
M&E experts are seen as auditors (bringing
trouble to project managers)
60. Monitoring and evaluation is an ongoing
activity.
It exists at all stages of the PCM.
Good M&E should start at the design and
planning stage of a programme or project.
Of course, M&E alone cannot fix
weak strategies and project design.
However, it is very hard to carry out
M&E without good planning.
61. Identify the key issues and root of the
problems you want to address (your
situation or problem analysis)
Identify the key changes that you want
Develop effective strategies to get what
you want
Design ways to monitor their progress
Determine what resources and knowledge
are required
Ensure that your work is cost effective.
62. 1. Initial assessments: initial needs
assessments, stakeholder analysis
2. Planning: project design (LFA), M&E plan,
baseline study (project start)
3. Implementation: process, midterm
and endline (final) evaluation
4. Post-implementation: impact study or
evaluation for dissemination, use
of lessons and possible longitudinal
studies
64. Like other stages of the project cycle, M&E
passes through the following stages
Plan/decision
Implementation
Reflect/learn/decide/adjust
Implement
Monitor
Reflect and adjust
Implement
Evaluate/learn/decide
65. What does this suggest?
• M&E is a continuous process
• It is not a one-time activity
• Planning for M&E should start at the very
beginning of the project
• There are different types, forms and objectives
of M&E
• M&E information may differ according to the
stage and objective
66. An evaluation plan typically includes descriptions of
the following:
Purpose of the program (brief description of the project)
The evaluation objectives
Evaluation questions/issues
Data collection plans
Data analysis plans
Dissemination and reporting activities
Other evaluation products
Timeline and budget
Staff responsible for each evaluation activity
67. What the project is going to achieve, from
the overall goal to specific objectives and
outputs
What activities will be carried out to
achieve its outputs and purpose
What resources (inputs) are required
How the progress and ultimate success of
the project will be measured and verified
(including qualitative and quantitative
indicators)
69. Understanding the systems approach
Different views of an M&E system
Levels of an M&E system
A good M&E system
Components of an M&E system
Comparing private and public sector
Limitations/problems of M&E system in the
public sector.
70. Systems approach
A system is a group of interconnected
and interrelated components that form a
whole (Senge, 1990).
From a systems-thinking perspective, a
system is a set of interrelated parts/elements.
Examples of systems: ecological systems,
body systems, car systems and, in our case,
the M&E system.
71. What is common to all systems is that no
part/element works independently.
An M&E system has critical
parts/components which individually
would not function effectively unless they
are put together.
In simple terms, an M&E system is
meant to capture, manage, analyze, report,
use and share information.
72. Narrow view:
M&E as a series of data collection tools
designed to gather information and summarize
progress against a pre-defined set of
objectives and indicators.
Wide view:
The M&E system covers elements such as
people, processes, baseline studies,
reporting, learning mechanisms and data
storage.
74. More than a list of ‘indicators’!
Clarity on who is going to use the information for
what kinds of decision-making
The system is ‘usable’ for the level of
decision-making
The system is ‘operational’: clear on who
is to collect and report what data, by
when and to whom
Sustainability: budget, human resources,
commitment and incentives to implement
75. There are no standardized and universally
agreed components of an M&E system.
A well-functioning M&E system may
comprise the following components:
Components/elements of the M&E system
The status and comparison of M&E
systems in the private and public sectors
Limitations/problems of the M&E system in
Tanzania
A good M&E system
M&E plan: role, components and features
76. There may be no universally
accepted and standardized definition of what
exactly constitutes an M&E system.
There are also differences in the wording
and terminology of some of the agreed
components of the system.
A mixed grill: "components" and
"elements" may carry different
connotations and interpretations.
Some have gone further and suggested
the exact number of elements/components
comprising an M&E system.
77. Many writers seem to suggest that there are
12 components/elements of a functional
M&E system.
This raises a practical question: how did
they arrive at this figure?
It has been observed, for example, that
some of the components included in the
M&E system are also included in the M&E
plan.
The fact that the M&E plan is itself one of
the system's elements makes it difficult to
pin down the number and exact elements of
the system.
79. But generally, a functional M&E system
requires the following elements:
Organizational structure: an M&E unit
which coordinates all M&E functions,
whether internal or outsourced
People (human capacity): adequate
staff with the necessary technical
know-how and experience. An M&E unit may
consist of an M&E manager, officers, a
statistician, IT staff and field officers.
80. M&E plan: a plan which shows the tactics
for operationalizing the system
Costed M&E work plan: a more detailed
plan to make the M&E plan operational,
with a budget for each activity
M&E framework / logical framework (at
project level), which outlines the
objectives, outcomes, outputs, activities
and inputs. This also helps in
developing indicators and managing risk
81. Database/data storage: a tracking system,
including how data are stored and retrieved at
different levels
Indicators and targets
Baseline information, to act as a yardstick
for assessing progress towards success
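As an illustration of how a baseline and a target work together as a yardstick, the sketch below (not from the original material; the function name and figures are hypothetical) expresses an indicator's current value as the share of the baseline-to-target distance already covered:

```python
# Illustrative only: measuring an indicator against its baseline and target.

def progress_towards_target(baseline: float, target: float, current: float) -> float:
    """Percent of the baseline-to-target distance covered so far."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100 * (current - baseline) / (target - baseline)

# Hypothetical example: an access indicator with baseline 60%, target 90%,
# currently measured at 75% -- i.e. halfway to the target.
print(progress_towards_target(60, 90, 75))  # 50.0
```

Without the baseline (the 60% above), the current 75% on its own would say nothing about how much of the change the intervention has achieved, which is why the slides treat baseline data as the foundation of both monitoring and evaluation.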
82. Supporting processes: including training
people, supervision, information flows
between different people, review of
information, and reporting of mistakes and
failures
Communication, advocacy and a culture of
M&E: policies and strategies that promote
M&E continuity in an organization
83. Mechanisms for data capture and analysis at
different levels (surveys and surveillance)
Participation: who participates in the M&E
process, how and why
Reporting system: who reports to whom,
including the different reports generated at
different levels
Data use, dissemination and learning: use
M&E information for decision-making and
develop mechanisms for information sharing
84. M&E as a discipline and profession is still in
its infancy,
for the reasons we discussed earlier.
Discussions with a few experts and
practitioners in this field seem to suggest
the following:
85. Comparatively, M&E systems are more
visible in the private/NGO sector than in
the public sector.
There is more financial control, with a few
other elements of monitoring and evaluation.
The nature of employment is more
performance-based, as compared to
indefinite employment and managerial
appointments in the government
sector.
Yet the M&E system is still not
effective, as it lacks many important elements.
86. Government officials are not ready to
be measured.
The poor functioning of OPRAS serves as a
good case to illustrate this argument.
OPRAS has in many cases been frustrated
and seems somewhat impractical.
The government was trying to introduce
open performance management, where
one's salary could be raised based on
performance.
Many filled in these forms because doing so
could raise their salaries.
87. Lack of capacity and political will: very
little budget for capacity development unless
there is donor funding.
Only recently has the government
required every ministry to have M&E units
within its planning and policy departments.
The majority of staff are economists, probably
with very little or no expertise in monitoring
and evaluation.
88. Lack of motivation: no resources, or
resources not arriving on time, so there is
little incentive to measure (no enabling
environment).
Long processes for obtaining resources:
funds are not reliable, so it is difficult to
implement and measure according to
plans (less control of the process).
Many contingencies, unplanned events,
political interference and competing
priorities (e.g. dengue and other disasters
that demand immediate attention).
89. Laxity, for several reasons, including a
corrupt environment and the lack of a
performance-based culture.
Many M&E systems are donor-driven.
Lack of tools in many daily operations.
Lack of common assessment tools, which
causes different understandings of the
same results.
Data-related problems: accessibility,
quality and cleaning.
Technological factors: databases.
90. It is a tool and strategy for operationalizing
the M&E system.
While the M&E system is normally
instituted at organizational level, an M&E
plan normally works at project level.
It is used as a guide to what you should
evaluate, what information you need and
who you are evaluating for.
The plan outlines the key evaluation
questions.
91. Depending on its detail, the evaluation
plan can be useful for identifying the people
responsible for different evaluation tasks
at different stages of the project.
The plan needs to be well articulated so
that it can be implemented by anybody at
any time.
It is normally prepared at the beginning of
the project
to allow the project staff to plan ahead
of time.
92. Information needs: what, who and why
Information sources: methods, frequency and
location of data to be collected
Responsibility for MER: who is
responsible
What indicators should be used to
measure and monitor each stage of the
project
How the information should be collected
How to measure quality and effectiveness
How and when to communicate the findings
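One way to make these components concrete is to lay out a single row of an M&E plan as structured data. The sketch below is purely illustrative; the field names and example values are assumptions, not part of any standard plan format:

```python
# Hypothetical sketch of one M&E plan row as structured data; the fields
# mirror the components listed above (indicator, source, method, frequency,
# responsibility) and are illustrative only.
from dataclasses import dataclass

@dataclass
class MonitoringPlanRow:
    indicator: str        # what is measured
    data_source: str      # where the data come from
    method: str           # how the data are collected
    frequency: str        # how often they are collected
    responsible: str      # who collects and reports

# Example row for a water project (invented values).
row = MonitoringPlanRow(
    indicator="Number of households with access to safe water",
    data_source="Household survey",
    method="Structured questionnaire",
    frequency="Quarterly",
    responsible="Field M&E officer",
)
print(row.frequency)  # Quarterly
```

Laying the plan out this way forces each indicator to carry an explicit source, frequency and responsible person, which is exactly the "operational" quality a good M&E system is said to need above.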
93. i) Identify your evaluation audience: who
are you evaluating for, and for what
purpose?
ii) Define the evaluation questions:
- Process: how well was the project
designed and implemented?
- Outputs: expected goods and services
- Outcome: to what extent did it meet the
overall needs, and how valuable are the
outcomes?
- Learning: what worked well and what did not?
94. iii) Identify the monitoring questions (more
specific questions derived from the
evaluation questions)
iv) Identify the indicators and data sources:
what information (indicators) do you need
to answer the monitoring and evaluation
questions, and where do you get this
information (data sources)?
v) Identify who is responsible for data
collection, and the timeline
95. vi) Identify who will evaluate the data and
how it will be reported
(optional, although highly recommended)
vii) Review the M&E plan:
- Highlight the data sources
- Reorder the plan in several ways,
including by data sources, data collection
timeline and framework
- Select/prioritize some questions for
budgeting purposes
- Reduce collection of unused information
96. User-friendly
Implementable
Monitor the use of the project outputs
Monitor the effectiveness of the project
Monitor the production/process of the
project outputs
Assess the project inputs
Assess the effectiveness and relevance of
the project outputs
Assess the extent to which observable
impacts can be attributed to the project
98. • Appreciate and demonstrate how logic
models and theories can guide
evaluation and project design.
• Discuss the relevance of at least two
common logic models (the project
conceptual model and the logical
framework) and the theory of change.
• Understand the major variables and
components of these models/theories.
• Understand the differences and
complementarities between logic
models and the theory of change.
99. A logic is a set of rules or relationships that
govern behaviour.
It explains the relationships between elements,
and between an element and the whole.
The term logic model is used as a
generic label for ways of displaying an
intervention and how that intervention will
bring about the desired change.
Like a road map, a logic model shows the route
travelled (or steps taken) to reach a certain
destination.
100. They are narrative or graphical
depictions of processes in real life that
communicate the underlying assumptions
upon which an activity is expected to lead
to a specific result.
In program or project management, logic
models demonstrate how an
intervention (a project, a program, a
policy, a strategy) is likely to contribute
to possible or actual impacts.
101. They describe linkages among program
resources, activities, outputs, and audiences,
and highlight the different orders of outcomes
related to a specific problem or situation.
They express the thinking behind an initiative's
plan.
They also explain why the program ought to work
and why it can succeed where other attempts have
failed.
102. They normally provide direction and clarity
by presenting the big picture of change along
with certain important milestones.
Initially used mainly for Program Evaluation
but increasingly used for project planning
and implementation
Used during different stages of the program
for new and existing programs.
During planning, a logic model can help
to clarify program strategy, develop
M&E systems, develop evidence for
measuring change (indicators), identify
appropriate outcome targets, and set priorities
for allocating resources and timelines.
103. During implementation:
They help to describe, modify or enhance
the program
Provide an inventory of what you have and
what you need to operate the program or
initiative, develop a management plan
Make mid-course adjustments
104. During evaluation to:
Reveal information needs and provide a
framework for interpreting results
Document accomplishments
Provide evidence about the program
Identify differences between the ideal
program and its real operation
Frame questions about attribution (of cause
and effect) and contribution (of initiative
components to the outcomes)
Tell the story of the program or initiative
105. Program logic/ logic model/
intervention logic
Program theory/theory of
change/model of change,
Causal model/results chain/ causal
chain/chain of causation
Road map/ conceptual map/
pathways map,
Mental model, blueprint for change,
framework for action/ program
framework
Program hypothesis/ theoretical