Project Monitoring and Evaluation
LECTURER: MUSYOKI MUSYOKA
ABAARSO TECH UNIVERSITY
Introduction to Monitoring and Evaluation
Definitions
Monitoring is the routine reporting of data on program implementation and performance.
Evaluation is the periodic assessment of a program's impact and value at the population level.
Monitoring
• Has the program been implemented according to the plan?
• Are there any changes in program resources or service utilization?
• Are there any weaknesses in the implementation of the program?
• Where are the opportunities to improve program performance?
Monitoring
• Continuous internal management activity
• Ensures that project is on track
• Measures progress towards objectives
• Identifies problems
Project monitoring activities can include:
• Tracking project milestones and deliverables
• Checking that the project's performance is on track to meet goals, objectives, and KPIs, and developing performance metric reports
• Monitoring project manager performance metrics, so nothing falls through the cracks
Project monitoring activities can include:
• Checking that the project schedule and timeline are on track
• Assessing the project budget and costs compared with the forecasts
• Staying on top of the project scope and making sure scope creep doesn't happen
Project monitoring activities can include:
• Carrying out an overall quality control assessment, and conducting
quality reviews (and creating reports)
• Watching out for any general issues that arise, and building an issue log
• Conducting risk assessments and producing risk management plans
• Setting up progress meetings and conducting status reports and reviews
Evaluation
• Are there any changes in behavior or health outcomes in the target population?
• To what extent are observed changes in the target population related to program efforts?
Evaluation
• Assessing whether a project is achieving its
intended objectives
• Conducted periodically
• Internal or external
• Focuses on outcomes and impacts
Monitoring vs. Evaluation
Level of measurement:
• Monitoring: data come from routine program reporting (example: monthly service statistics)
• Evaluation: data are measured at the population level (example: prevalence of MDR-TB in Country X)
Monitoring vs. Evaluation
Frequency: Monitoring is periodic and regular; evaluation is periodic and episodic.
Main function: Monitoring involves tracking and supervision; evaluation involves assessment and analysis.
Main purpose: Monitoring aims to improve efficiency and adapt the action plan; evaluation aims to improve efficiency, impact, and future programs.
Focus: Monitoring looks at inputs, products, deliverables, and processes; evaluation looks at effectiveness, relevance, impact, efficiency, and sustainability.
Sources of information: Monitoring draws on routine systems, field observations, activity reports, and rapid evaluations; evaluation draws on the same sources plus questionnaires, studies, and interviews.
Led by: Monitoring is led by project managers, volunteers, donors, and supervisors; evaluation is led by program managers, donors, supervisors, and external evaluators.
Interdependence between Monitoring and Evaluation
Monitoring clarifies the objectives of the program; evaluation analyzes why results were or were not achieved.
Monitoring links activities and their resources to objectives; evaluation assesses cause and effect between activities and results.
Monitoring translates objectives into performance indicators and sets targets; evaluation examines implementation processes.
Monitoring collects data regularly on these indicators, comparing actual results to targets; evaluation explores unanticipated outcomes.
Monitoring reports progress to managers and alerts them to issues; evaluation extracts lessons, focuses on significant achievements or the potential of the program, and offers recommendations for improvement.
Uses of M&E for program management
• Informs decisions about program operations and service delivery
• Ensures effective and efficient use of resources
• Determines whether or not the program is implemented according to plan
Uses of M&E for program management
• Meets reporting requirements of different agencies and sectors of government
• Evaluates the extent to which the program/project is having or has had the desired impact
WHY IS M&E IMPORTANT?
• Tracking resources
• Feedback on progress
• Improving project effectiveness
• Informing decisions
• Promoting accountability
• Demonstrating impact
• Identifying lessons learned
WHY IS M&E IMPORTANT?
• Accountability
• Learning and Adaptation
• Decision-making
• Transparency and Communication
• Sustainability
Who needs and uses M&E information?
To improve program implementation:
• Managers
• Donors
• Governments
• Technocrats
To inform and improve future programs and to inform stakeholders:
• Donors
• Governments
• Communities
• Beneficiaries
Who conducts M&E?
• Program implementers
• Stakeholders
• Beneficiaries
Remember: M&E requires both technical skills and a participatory process.
Key Principles of M & E
• Participation and Stakeholder Engagement
• Adequate Planning and Design
• Data Collection and Analysis
• Continuous Learning and Adaptation
• Credibility and Independence
• Utilization of Findings
Overview of the M&E Process
• Monitoring and Evaluation (M&E) is a crucial component of project management aimed at assessing the progress, outcomes, and impact of a project.
• It provides valuable insights into whether project goals and objectives are being achieved effectively and efficiently.
Overview of the M&E Process
1. Definition of objectives
2. Identification of indicators
3. Data collection
4. Data analysis
5. Reporting and feedback
6. Learning and adaptation
7. Utilization of findings
8. Iterative process
1. Definition of objectives
• The M&E process begins with clearly defining the project's objectives, which serve as benchmarks for measuring success.
• These objectives should be specific, measurable, achievable, relevant, and time-bound (SMART).
2. Identification of indicators:
• Indicators are the quantifiable measures used to assess project performance.
• They can be input indicators (related to resources invested), output indicators (related to project outputs), outcome indicators (related to immediate project outcomes), or impact indicators (related to long-term effects); the sketch after this list gives an illustrative example of each type.
• Selecting appropriate indicators is essential for accurate evaluation.
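As a minimal, hypothetical sketch (in Python) of how a project team might record indicators at these four levels; the indicator names, baselines, and targets below are invented for illustration and are not part of the lecture material.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str       # "input", "output", "outcome", or "impact"
    baseline: float  # value at the start of the project
    target: float    # value the project aims to reach

# Hypothetical indicators for a TB control project (illustrative values only)
indicators = [
    Indicator("Budget disbursed (USD)", "input", 0, 250_000),
    Indicator("Health workers trained", "output", 0, 120),
    Indicator("TB case detection rate (%)", "outcome", 60, 80),
    Indicator("TB prevalence per 100,000 population", "impact", 150, 100),
]

for ind in indicators:
    print(f"{ind.level:>7} | {ind.name}: baseline {ind.baseline}, target {ind.target}")
```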
3. Data collection:
• Data collection involves gathering relevant information to measure the predefined indicators.
• This can be done through various methods such as surveys, interviews, focus groups, observations, and document reviews.
• It is crucial to ensure data reliability and validity during this stage; a simple completeness check is sketched after this list.
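As a rough illustration of a basic data-quality check during collection, the sketch below loads survey responses from a CSV file and flags incomplete records; the file layout and field names are assumptions made up for this example, not a prescribed instrument.

```python
import csv

# Hypothetical required fields for a service-delivery survey
REQUIRED_FIELDS = ["respondent_id", "district", "service_visits", "satisfaction_score"]

def load_survey(path):
    """Load survey responses, separating complete records from those needing follow-up."""
    complete, flagged = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if all(str(row.get(field, "")).strip() for field in REQUIRED_FIELDS):
                complete.append(row)
            else:
                flagged.append(row)
    print(f"{len(complete)} complete records, {len(flagged)} flagged for follow-up")
    return complete, flagged
```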
4. Data analysis:
• Once the data is collected, it needs to be analyzed to uncover patterns, trends, and insights.
• Statistical techniques, qualitative analysis, and data visualization tools are commonly used for this purpose.
• The analysis helps identify strengths, weaknesses, and areas needing improvement, enabling informed decision-making; a small example of comparing results to targets follows this list.
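A minimal sketch of the kind of comparison this analysis often involves: measuring reported results against targets and flagging indicators that are falling behind. The indicator names, values, and the 90% threshold are illustrative assumptions, not figures from the lecture.

```python
# Hypothetical monitoring data: actual values reported against targets
results = {
    "Health workers trained":     {"target": 120, "actual": 95},
    "TB case detection rate (%)": {"target": 80,  "actual": 72},
    "Clinics reporting monthly":  {"target": 45,  "actual": 44},
}

for name, r in results.items():
    achievement = 100 * r["actual"] / r["target"]   # percent of target achieved
    status = "on track" if achievement >= 90 else "needs attention"
    print(f"{name}: {achievement:.0f}% of target ({status})")
```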
5. Reporting and feedback:
• The findings from the analysis are presented in comprehensive reports to stakeholders.
• These reports include assessment results, insights, and recommendations for project improvement; a simple report-formatting sketch follows this list.
• Feedback is sought from stakeholders to ensure that diverse perspectives are considered, enhancing the rigor and accuracy of the evaluation.
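Purely as an illustration of turning analysis output into a shareable summary, the sketch below formats indicator results into a short plain-text progress report; the reporting period, figures, and wording are assumptions for this example, not a prescribed format.

```python
# Hypothetical analysis output carried over from the previous sketch
results = {
    "Health workers trained":     {"target": 120, "actual": 95},
    "TB case detection rate (%)": {"target": 80,  "actual": 72},
}

def progress_report(results, period="Q2 2024"):
    """Build a short plain-text progress summary for stakeholders."""
    lines = [f"Progress report for {period}", "-" * 30]
    for name, r in results.items():
        pct = 100 * r["actual"] / r["target"]
        lines.append(f"{name}: {r['actual']} of {r['target']} ({pct:.0f}% of target)")
    return "\n".join(lines)

print(progress_report(results))
```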
6. Learning and adaptation:
• M&E is not merely an assessment tool, but also a learning process.
• The evaluation findings provide valuable lessons for project managers, enabling them to adapt strategies and make informed decisions.
• Continuous learning and adaptation enhance project effectiveness and increase the likelihood of achieving desired outcomes.
7. Utilization of findings
• M&E findings should be actively used to inform decision-making, policy development, and resource allocation.
• The insights gained through the evaluation process should guide future project planning and implementation, ensuring better project outcomes in subsequent stages.

Editor's Notes

  • #5 Monitoring activities focus on the details of your program operation. The data that we need to conduct monitoring are often routinely reported because the information needed to monitor program operation is needed on a monthly, quarterly or yearly basis. Monitoring can help answer the following questions that program managers have: (Refer to slide) Details: Monitoring is the set of activities and data collection that allows an accurate understanding how the program is being implemented and whether or not the program is implemented according to the workplan. Monitoring activities can be used to detect changes in program implementation over time, for example, changes in service utilization that can be detected through routinely reported service delivery statistics. Monitoring may also be used to identify program strengths and weaknesses, such as gaps in program implementation or areas where the program is lagging behind expectations. With these data and routine reporting, program managers may also identify opportunities to improve program performance.
  • #6 SLIDE CONTENT: According to the World Bank, monitoring is a continuous internal management activity to ensure that project implementation is on track. It is the systematic measurement of progress toward desired objectives. It involves measuring inputs, activities and outputs, assessing whether they are contributing to achieving the project’s objectives and identifying any existing or potential problems. It helps ensure that the project achieves its defined objectives within a prescribed time frame and budget. TRAINER NOTE: Compare the definitions that participants provided during their brainstorm with the information provided on this and the next slide. For more information on M&E from the World Bank, see the following online training program: http://info.worldbank.org/etools/docs/library/192862/Module4/Module4a.html
  • #10 Evaluation activities focus on the impact of your program at the population level. True program evaluation involves the measurement of behaviors and/or health outcomes in your target population. Evaluation also implies that you are able to attribute changes in these outcomes to your program activities or interventions. Evaluation research requires a methodology or study design that allows program managers to measure the contribution of their efforts to changes in outcomes while considering the effects of other influences on those outcomes. For example, if an NTP manager wants to address delayed diagnosis of TB through improved training of front-line health workers, he or she would have to document a decrease in the average number of days between when TB suspects are identified and when they begin treatment. However, there may be other activities or influences on the diagnostic process that are not related to the improved training. So if the manager wants to know the contribution of the training activity, he or she will have to design a research project that allows him to control for these factors. This usually involves a controlled trial, which is costly and logistically difficult to implement. For this reason, we tend to focus more on monitoring and less on evaluation.
  • #11 SLIDE CONTENT: According to the World Bank, evaluation is the process of assessing whether a project is achieving its intended objectives. This may be done periodically by internal managers or by external stakeholders. Evaluation focuses on outcomes and impacts and assesses whether they are contributing to achieving program goals and objectives. TRAINER NOTE: Let participants know that monitoring and evaluation can also be applied to policy implementation but that the focus of this session will be on monitoring and evaluating projects and programs. Make sure that participants understand the distinction between monitoring and evaluation. The difference is somewhat subtle but should become clearer later in this session. Before moving to the next slide, ask participants for their thoughts on why M&E is so important. Why do people conduct M&E? Why do many donors insist on it? What are the benefits?
  • #12 Monitoring data come from your program, “program-based” data. The data are often reported from individual facilities, networks of facilities, district and regional offices, etc. For example, managers often receive routine reports on the number of sputum smears collected and examined for diagnostic and smear conversion tests. They may use this to determine the average number of smears performed per suspect for diagnostic purposes or the average number performed for conversion tests. Evaluation data, on the other hand, are not routinely reported and usually require a special study or survey. With evaluation, we are measuring outcomes at the level of the target population. As we get further into the workshop, we’ll discuss the differences between monitoring and evaluation in greater detail. This workshop and the Compendium we are going to introduce focus on monitoring, because this what program managers are engaged in every day.
  • #15 M&E helps you make informed decisions about your program operations. It helps you make the most effective and efficient use of resources. It helps you determine exactly where your program is right on track and where you need to consider making corrections. And M&E helps you come to objective conclusions regarding the extent to which your program’s impact can be judged a “success”. M&E is indispensable because these tools inform planners, managers, and implementers whether or to what extent the program or project is operating effectively and according to expectations. By keeping track of specific areas of program performance, operational problems can be identified while they can still be corrected and thus ongoing performance can be improved. Meanwhile, managers can also keep track of the extent to which activities are having their desired effects. Results demonstrated through good monitoring and evaluation techniques enable decision makers also to correct strategies or even overcome unanticipated difficulties. In other words, M&E improves the program’s ultimate impact through better information and increased understanding even while activities are in progress. And as results are shared, the ongoing projects of others, and the future design of comparable activities and their implementation, likewise can all be improved. Additional background: An important point here is that the significance and value of M&E is realized only through use of the M&E data. It is not important in and of itself to collect numbers, even the best numbers, nor is it abstractly important to construct the perfect indicators. If data is not reviewed and interpreted and then fed back into decision-making, M&E’s ultimate purpose -- program improvement -- cannot be met. To be good M&E, it must be M&E that is actively used in problem-solving within the ongoing program, and in further steps of decision-making.
  • #17 SLIDE CONTENT: There are a lot of good reasons for conducting M&E. While it can be time-consuming and requires resources such as staff and funding, the benefits of M&E far outweigh the costs. Monitoring and evaluation are important for: Determining whether resources are being expended in the manner planned and according to the program budget. Getting frequent feedback about how your project is progressing and making sure that things are on track. Improving the effectiveness of projects by allowing for mid-course corrections if there are aspects that are not having the desired impact. Serving as a basis for sound decisions about whether and how to revise your project. Promoting accountability among project staff. Demonstrating the impact and success of your project. Identifying lessons learned, enabling institutional learning and informing decisions about future programs. According to the World Bank, effective M&E provides answers to the following questions: Are we doing the right things? Are our interventions contributing to the project objectives? Are we doing the right things right? How effective have we been in achieving expected outcomes? How efficient have we been in optimizing resources? Are these results sustainable? Are there better ways of doing the right things? What are the best practices we’ve identified?