"Assessing Outcomes in CGIAR: Practical Approaches and Methods" training by Burt Perrin for CGIAR Evaluation Community of Practice (ECOP), 2nd annual workshop 2014
Guidance Note on CGIAR Research Programs (CRPs) Commissioned Evaluations. Presentation by Sirkka Immonen to Evaluation Community of Practice participants, 2nd annual ECOP workshop, 2014
Independent evaluation of CGIAR Research Program on Policies, Institutions and Markets (PIM): Brief summary of findings, conclusions and recommendations
Module 6 Implementation: project management and monitoring - Togar Simatupang
This document discusses project management, monitoring, and evaluation for smart city projects. It begins with learning objectives which include understanding project management principles, project cycle management, logical framework approaches, and monitoring and evaluation. It then discusses key areas of smart city projects like quality of life, healthcare, education, etc. It covers the project cycle from planning to implementation to evaluation. Key frameworks discussed include logical framework analysis, project cycle management, and active implementation frameworks. The document emphasizes skills needed for developing digital cities like collaboration, change management, and digital literacy.
Monitoring is the continuous collection of data and information on specified indicators to assess the implementation of a development intervention in relation to activity schedules and expenditure of allocated funds, and progress and achievements in relation to its intended outcome.
Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.
The document discusses monitoring and evaluation of education programs for sustainable development. It aims to identify learning processes aligned with ESD and their contributions. Key learning processes include collaboration, engaging stakeholders, and active participation. ESD learning refers to gaining knowledge as well as learning critical thinking and envisioning positive futures. However, data on ESD processes and outcomes is limited. The review recommends improved data collection focusing on experiences rather than literature. More evidence is still needed to fully understand ESD's contributions to sustainable development.
Participatory Monitoring and Evaluation background, concepts and principles, goals of PM&E, the PM&E process, stakeholder analysis, PM&E framework, plan, worksheet, a case study using PM&E
This document provides an overview of monitoring and evaluation (M&E) processes at Room to Read. It discusses key M&E concepts like indicators, data collection, and the Global Solutions Database. It also outlines Room to Read's approach to M&E, including defining goals and objectives, collecting and analyzing global and country-specific indicators, ensuring data quality, and using M&E data to track progress and improve programs. Examples of indicators for different Room to Read programs like reading rooms and girls' education are also presented.
This document provides an overview of monitoring and evaluation (M&E) for projects. It defines monitoring as the continuous collection and analysis of data on a project's progress, while evaluation assesses a project's effectiveness in achieving its goals. The key differences between M&E are outlined, including that monitoring is ongoing and focuses on activities and outputs, while evaluation occurs periodically and examines outcomes and impacts. M&E frameworks, principles, systems and levels of effort are also described to guide effective project implementation and learning.
AFID project monitoring and evaluation practices and lessons - Rosemirta Birungi
The document provides a summary of a monitoring and evaluation review conducted in 2013 on two projects implemented by AFID in Eastern and Central Africa. It summarizes the progress and performance of the projects against targets in areas such as generation of agricultural technologies, capacity building, and enabling policies. It also discusses AFID's experience using M&E tools, the benefits of M&E in improving reporting and project implementation, challenges faced in operationalizing indicators, and recommendations for strengthening M&E in future projects.
A series of modules on project cycle, planning and the logical framework, aimed at team leaders of international NGOs in developing countries.
Part 7 of 11.
There are two handouts to go with this module: Population Indicators and a Logframe with blanks. http://www.slideshare.net/Makewa/population-indicators-handout and http://www.slideshare.net/Makewa/exercise-watsan-logframe-with-blanks
During this session we will:
*Review importance of monitoring and evaluation
*Share overview of grant model evaluation plan
*Review methodologies used in previous evaluations
*Share plans for future evaluation methodologies
Monitoring and evaluation provide real-time information on project implementation and more in-depth assessments, respectively. Monitoring checks progress toward goals and identifies issues to inform adjustments, while evaluation assesses what worked and didn't work independently. Both are integral to program management. Effective monitoring and evaluation establish what will be monitored and evaluated, responsibilities, methods, resources, and timing of activities to validate the program's logic and encourage improvements.
Monitoring and evaluation principles and theories - commochally
This document discusses monitoring and evaluation (M&E) capacity in Tanzania. It notes that while M&E is important for improving development outcomes, many countries, including Tanzania, lack necessary M&E capacity at both the individual and institutional levels. Comprehensive training is needed to address gaps in M&E skills. The document outlines the differences between monitoring, which tracks project progress, and evaluation, which assesses outcomes and impacts in more depth. Both M&E are important management tools that provide useful feedback when integrated.
Two Examples of Program Planning, Monitoring and Evaluation - MEASURE Evaluation
Presented by Laili Irani, Senior Policy Analyst for the Population Reference Bureau, as part of the Measuring Success Toolkit webinar in September 2012.
This document provides an introduction to monitoring and evaluation for interns. It defines monitoring as the routine collection and analysis of project data to provide information on progress, while evaluation assesses a project's achievements against its objectives and identifies lessons learned. Several tools for monitoring and evaluation are described, including Gantt charts, timelines, and logical frameworks. The presentation emphasizes that monitoring and evaluation are important project management processes that help ensure quality, allow for course corrections, and provide lessons for future projects.
United Way of Erie County - Programs, Program Monitoring and Evaluation, and ... - Via Evaluation
Caroline Taggart, Senior Evaluator, was invited by the United Way of Buffalo & Erie County to present at the organization’s Board Leadership Training program. Caroline’s presentation covered the importance and general tenets of Program Monitoring and Evaluation, with an emphasis on questions a non-profit organization’s Board Members can ask to encourage their organization’s engagement in these activities to ensure quality program delivery and maximum impact.
This document provides guidance on developing an effective monitoring and evaluation (M&E) plan. It defines monitoring as measuring progress against targets and milestones, while evaluation assesses success in meeting goals and lessons learned. Key elements of an M&E plan include activities, efficiency, effectiveness, impact, outcomes, and sustainability. The document outlines different types of evaluations based on process (internal, external, self) and character (formative, summative, goal-based). It also provides templates for developing an M&E plan, including a logical framework matrix to define objectives, indicators, and assumptions. Regular monitoring and evaluation against indicators is important for accountability and learning.
Training on Monitoring & Evaluation (M&E) of Adaptation and the NAP process - NAP Events
Presented by: Timo Leiter & Julia Olivier
3c. Developing (sub)national adaptation M&E systems
Participants will be taken through a short training course on the basic steps of developing a national adaptation M&E system with specific reference to the process to formulate and implement NAPs. The training will be based on the guidebook “Developing national adaptation M&E systems” developed by GIZ in collaboration with the LEG and the Adaptation Committee.
This document outlines the development of a monitoring and evaluation framework for transboundary marine spatial planning in the Baltic Sea region. It will:
1. Review existing MSP evaluation frameworks and literature to draft an initial framework.
2. Collaboratively scrutinize and develop the suggested framework with input from Baltic SCOPE project cases over 18 months to improve relevance and feasibility.
3. Finalize the evaluation and monitoring framework for cross-border MSP in the Baltic Sea Region after incorporating lessons learned and case study feedback.
Difference between monitoring and evaluation - Doreen Ty
Monitoring involves tracking project performance and progress toward goals during implementation to ensure accountability. It answers whether things are being done right and allows for timely management decisions. Evaluation assesses efficiency, impact and relevance after completion to judge the overall merits and determine if the right things were done. Both aim to improve projects, but monitoring focuses on day-to-day management during implementation while evaluation provides longer-term perspective at critical points like midway or after completion.
This document outlines a Monitoring and Evaluation Policy developed by Komal Zahra for HAADI. The policy aims to establish a results-focused and accountable approach to monitoring, evaluating and learning from HAADI's compact and threshold programs. It requires the development of detailed M&E plans for each program that identify indicators, data collection methods, evaluation questions and responsibilities. The M&E plans will be used to regularly monitor progress, evaluate impacts, and ensure accountability. The policy also establishes procedures for modifying, approving and reporting on M&E plans over time, as well as conducting data quality reviews and evaluations of the policy itself.
6 M&E - Monitoring and Evaluation of Aid Projects - Tony
A series of course modules on project cycle, planning and the logical framework, aimed at team leaders of international NGOs in developing countries.
This is part 6 of 11, beginning with 2 modules on leadership and conflict resolution, then 9 modules on project cycle management.
This module has 3 handouts and presenter notes as separate documents.
Sample Proposal: http://www.slideshare.net/Makewa/6-watsan-training-sample-proposal-09
Slides as a handout: http://www.slideshare.net/Makewa/6-me-handout
Presenter notes: http://www.slideshare.net/Makewa/6-module-6-presenter-notes
Monitoring and evaluation are important for e-governance projects to track their outputs and outcomes. Monitoring relates to tracking project progress and deliverables against the project plan. Evaluation assesses achievement of objectives and provides recommendations. Outputs are tangible deliverables like processes, systems, and infrastructure. Outcomes are intended results like increased efficiency and quality services. A monitoring and evaluation framework should define indicators to measure outputs and outcomes. This allows evaluating project performance and assessing progress toward business goals.
This document provides a monitoring and evaluation framework for the Economic Development Department of an unnamed city. It outlines the legislative and policy context for monitoring and evaluation in the local government. It describes the methodology used to develop the framework, which included a literature review, reviewing department documents, and consulting with staff. The framework is intended to establish common understanding of key monitoring and evaluation principles and provide the foundation for tracking the performance of the department and its projects in achieving their objectives. It outlines the planning, monitoring, evaluation, reporting, and feedback phases to put the framework into practice.
This document provides an introduction to monitoring and evaluation (M&E) plans. It discusses what an M&E plan is, how it relates to a logic model, and how it can contribute to a program's success. An M&E plan describes a program's approach to implementing M&E activities, including what data will be collected, how and when data collection will occur, and who is responsible. It helps programs measure progress toward objectives and determine if desired results were achieved. The document also provides a template for components to include in an M&E plan and discusses how the complexity of M&E plans has increased over time with different requirements from organizations like USAID, CDC, and GAC. It emphasizes involving relevant technical staff and stakeholders in developing the plan.
Community engagement - what constitutes success - contentli
This document discusses evaluating community engagement programs. It explains that evaluation involves systematically collecting information about a program's activities and outcomes to track progress, make judgements, and improve effectiveness. For community engagement specifically, evaluation can determine what worked well or not, if engagement met its objectives, and if it enhanced knowledge and decision-making. The document recommends clarifying a program's logic, outcomes, and purpose of evaluation with stakeholders. It also suggests establishing performance indicators and methods for collecting and analyzing information to both manage programs adaptively and use findings.
This document outlines an agenda for a MEAL workshop on ETH1117 from April 26-29, 2023. The workshop will cover general MEAL concepts like theory of change, logical frameworks, indicators, and measuring success. It will also discuss developing an MEAL plan for ETH1117, data management, accountability, and learning lessons from previous projects. Participants will learn about the differences between monitoring and evaluation and discuss existing MEAL practices, challenges, and expectations for strengthening MEAL systems. The workshop aims to familiarize participants with key MEAL topics to support evidence-based project management, accountability, and continual improvement.
Monitoring involves continuous assessment of project implementation to provide feedback and identify successes and problems. It focuses on schedules, inputs, and services. Evaluation assesses outcomes, impacts, effectiveness, and sustainability. The document discusses the importance of monitoring and evaluation for improving decision-making, achieving outcomes, and organizational learning. It provides definitions and comparisons of monitoring and evaluation. Participatory approaches are emphasized to empower stakeholders. Clear objectives and indicators are needed to measure progress.
Monitoring and Evaluation for Project Management - Muthuraj K
Monitoring and evaluation (M&E) is a set of techniques used in project management to establish controls and ensure a project stays on track to achieve its objectives. Monitoring involves systematically collecting, analyzing, and using information for management decisions and control. It provides information to identify and solve problems and assess progress. Evaluation determines the effectiveness, efficiency, relevance, impact, and sustainability of a project. Both monitoring and evaluation are important for project management and should be integrated throughout the project cycle.
This presentation gives a vivid description of the basics of doing a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA) using a practical example from the CLICS project. It also includes the CDC framework for program evaluation.
N.B.: Kindly open the ppt in SlideShare mode to fully use all the animations wherever made.
Curriculum monitoring involves periodically assessing curriculum implementation and making adjustments. It determines how well the curriculum is working and informs decisions about retaining, improving, or modifying aspects. The document outlines the definition, rationale, types, roles, process, and similarities and differences between monitoring and evaluation. An effective monitoring system is simple, provides timely feedback, is cost-effective, flexible, accurate, comprehensive, relevant, and leads to learning. It involves clarifying roles, identifying evidence, data collection tools, training monitors, preparing staff, conducting monitoring, analyzing and sharing results, and determining a plan of action.
Workshop: Monitoring, evaluation and impact assessment - WorldFish
The document introduces monitoring and evaluation in results-based management and discusses key concepts like logic models and theories of change. It provides 3 key points:
1) Results-based management focuses on achieving important organizational changes and improvements in performance through defining expected results, monitoring progress, reporting on performance, and learning lessons.
2) Logic models graphically illustrate program components and how activities lead to outputs, outcomes and impact. Theories of change explain the underlying assumptions and causal pathways of change.
3) Evaluations are used to assess what was implemented, the strength of causal models, intended outcomes, and ultimately the impacts of interventions. Different evaluation strategies are suited to simple, complicated and complex interventions.
This document discusses the importance of monitoring and evaluation (M&E) for programs and projects. It defines monitoring as an ongoing process of collecting and analyzing data to track progress and make adjustments, while evaluation assesses relevance, effectiveness, impact and sustainability. The key aspects of building an M&E system are agreeing on outcomes to measure, selecting indicators, gathering baseline data, setting targets, monitoring implementation and results, reporting findings, and sustaining the system long-term. A strong M&E system provides evidence of achievements and challenges, enables learning and improvement, and helps ensure resources are allocated to effective programs.
Monitoring involves systematically collecting project data to track progress, ensure accountability, support decision making, and promote learning. Evaluations assess needs, processes, interim progress, outcomes, and impacts using methods like surveys and interviews. Effective monitoring and evaluation systems include logical frameworks, indicators, data collection plans, and ensuring data quality to measure outputs, outcomes and assess project effectiveness.
Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. It utilizes many of the same methodologies, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general does not rely on as much.
The document outlines a workshop on monitoring and evaluating social impact in sport for development programs. It discusses the importance of monitoring and evaluation for providing information on program performance, accountability, and guidance for future activities. The workshop covers developing program logic models, tools for data collection such as qualitative and quantitative methods, research ethics, and strategies for reporting evaluation findings.
This document provides an overview of program evaluation, including definitions, what can be evaluated, why programs are evaluated, and different types of evaluation. It defines program evaluation as making judgements about programs based on analysis and information related to issues like relevance, cost-effectiveness, and success for stakeholders. Formative evaluation aims to improve existing programs, while summative evaluation assesses program results. Evaluations provide information and judgements to stakeholders.
This document discusses evaluation principles, processes, components, and strategies for evaluating community health programs. It begins by defining evaluation and explaining that the community nurse evaluates community responses to health programs to measure progress towards goals and objectives. The evaluation process involves assessing implementation, short-term impacts, and long-term outcomes. Key components of evaluation include relevance, progress, cost-efficiency, effectiveness, and outcomes. The document then describes various evaluation strategies like case studies, surveys, experimental design, monitoring, and cost-benefit/cost-effectiveness analyses and how they can be useful for evaluation.
A needs analysis involves comparing current conditions to desired goals to understand performance problems. It can be extensive, using large sample sizes for general understanding, or intensive, using smaller samples for in-depth cause-and-effect analysis. Performing a needs analysis involves gap analysis, identifying priorities, outlining a methodology, gathering and analyzing both quantitative and qualitative data, presenting findings, and making conclusions and recommendations. An example needs assessment addressed gender-based violence in schools in Africa through stakeholder interviews, performances, photo voices, drawings, and documentaries to develop an action plan.
Monitoring and evaluation are important project management tools. Monitoring involves regularly collecting and analyzing information to track progress over time, while evaluation analyzes effectiveness and impact through making judgments about progress. Participatory monitoring and evaluation involves stakeholders jointly monitoring and evaluating activities. The main purposes of monitoring and evaluation are to assess results, improve management, promote learning, understand stakeholder perspectives, and ensure accountability.
Collaborative 2: Ingrid, Margarita and Sandra - Sandra Guevara
This document provides guidance on project evaluation. It discusses what project evaluation is, its importance in project design and implementation, and additional benefits such as project improvement and capacity building. It outlines the planning, data collection, analysis, and reporting process for evaluations. Key steps include examining issues and objectives, establishing a team, identifying the purpose, focusing on improvement, assessing outcomes and impacts, and creating a report to synthesize findings. The goal is to help determine what is and is not working to improve the project.
Project monitoring and evaluation by Samuel Obino Mokaya - Discover JKUAT
This document discusses project monitoring and evaluation. It defines monitoring as assessing project implementation against agreed schedules to identify successes and problems. Evaluation assesses a project's relevance, performance, impact and effectiveness. Several monitoring and evaluation tools are described, including reports, validation, participation and different types of evaluations. Good monitoring and evaluation provides feedback to improve projects and identify issues early. It should establish indicators and collect data through methods like interviews, observation and documentation review.
The document discusses developing a research agenda for impact evaluation of development programs. It proposes that the agenda should:
1) Cover different types and purposes of evaluations, questions addressed, users, and those conducting evaluations.
2) Be developed through consultation with various stakeholders and review of existing documentation and examples.
3) Include different types of research like documenting current practices, trials of methods, and longitudinal studies of impact evaluations.
4) Address important questions like how to involve communities and accommodate different views of evidence, and how to represent complex interventions and identify unintended impacts. Support is needed to develop the agenda through legitimate processes and interdisciplinary cooperation.
"Assessing Outcomes in CGIAR: Practical Approaches and Methods"
1. Evaluation of outcomes of CGIAR’s CRPs
ECoP training session
25-26 September 2014
Burt Perrin
Burt@BurtPerrin.com
+33 4 67 81 50 11
La Masque, 30770 Vissec, FRANCE
2. Purpose of the training session
Consider approaches to the evaluation of outcomes
Complex programmes/initiatives
Focus on the CRPs
Outcomes of the session
Better understanding: what’s involved in evaluation of outcomes in complex environments
Appreciation of challenges – and opportunities
Ideas that you can use
3. Topics to explore
How to plan evaluations
Complexity: what it is, implications for evaluation of research, CRPs
Evaluation vs. other related activities
Some tools for evaluation planning (evaluability assessment, TOC, outcome trajectories)
Focus on evaluation use
Evaluation designs and methods
Analysis and interpretation
What this means for evaluation of outcomes of CRPs
5. Why do evaluation?
Raison d’être of evaluation
Social betterment
Sensemaking
More generally, rationale for evaluation
To be used!
Improved policies, programmes, projects, services, thinking
6. Evaluation – some key aspects
Systematic, data-based, “objective”
Evidence can come from multiple sources
Can consider any aspect of a strategy, policy, programme, project
Major focus on outcomes that follow from the intervention (i.e. attribution, cause)
E-valua-tion
7. Different types of evaluation
Ex-ante vs. ex-post
Process vs. outcome
Formative vs. summative
Descriptive vs. judgemental
Accountability vs. learning (vs. advocacy vs. pro-forma)
Short-term actions vs. long-term thinking
Etc.
8. Maximising evaluation value & use
The right questions!
Outcome focus
Emergent – not restricted to predetermined objectives/indicators
Respects context, identifies how it interacts with what is done
Identifies alignment of activities/projects/programmes with strategy/goal
Assesses results orientation as well as actual results achieved
9. What is “complexity”?
Emergent vs. predetermined outcomes
Feedback loops
Indirect, non-linear trajectories; tipping points
Unpredictability, random events
Multiple components: partners, levels, causal package (complicated)
(But: try to explain complex situations as simply as possible!)
10. Nature of intervention and logic chain (e.g. Rogers)
Simple
E.g. following a recipe
Linear cause-and-effect chain
Complicated
E.g. sending a rocket to the moon
Multiple factors happening simultaneously
Complex
E.g. raising a child
Recursive (feedback loops), emergent outcomes that can’t be identified in advance
Tipping points
11. Some characteristics of non-linear change (complexity science)
Cause-effect distance (outcome trajectory): long (or short) in time
Depends upon a large number of intervening variables
Usually several causes for any effect
Change not proportional or incremental; qualitative leaps and bounds
Sometimes initial ‘negative’ effects (e.g. the J-curve) – implications for evaluation? (see the sketch after this list)
Feedback loops
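The J-curve point has a direct practical consequence: with non-linear change, the timing of measurement can reverse the conclusion. Below is a minimal, purely illustrative Python sketch; the trajectory and all numbers are invented, not drawn from any real programme.

```python
# Toy J-curve: an intervention's outcome dips before it improves.
# The trajectory and all values are invented, for illustration only.

baseline = 100.0

def outcome(year: float) -> float:
    """Hypothetical outcome: early disruption, later gains (a J-curve)."""
    return baseline - 15 * year + 4 * year ** 2

for year in [1, 2, 3, 4, 5]:
    change = outcome(year) - baseline
    verdict = "looks harmful" if change < 0 else "looks beneficial"
    print(f"year {year}: change = {change:+.1f} -> {verdict}")

# Measured in years 1-3 the programme "fails"; measured in year 5 it
# shows a clear gain. Outcome trajectories, not snapshots, are needed.
```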
12. Werner Herzog:
1. “Man is a god when he dreams, but a beggar when he reflects.”
2. “Facts do not constitute the truth. There is a deeper stratum.”
Agree or not?
Implications for evaluation?
13. Future orientation - Dilemma
“The greatest dilemma of mankind is that all knowledge is about past events and all decisions about the future.
The objective of this planning, long-term and imperfect as it may be, is to make reasonably sure that, in the future, we may end up approximately right instead of exactly wrong.”
16. Evaluation vs. Research
Research
Primary objective: long-term knowledge generation (a single study is rarely sufficient)
Theory creation/testing/revision
Evidence needs: the more the better
Evaluation
Reference to a particular type of situation
Practical application/utilisation in some form is an essential component
Evidence needs: as little as necessary to support meaningful use (level of confidence required)
But: evaluation makes use of research methodologies – from diverse disciplines
17. Monitoring – the concept and common definitions
Tracking progress in accordance with previously identified objectives, indicators, or targets (plan vs. reality)
RBM, performance measurement, performance indicators …
In French: “suivi” (follow-up/monitoring) vs. “contrôle” (checking/audit)
Some other uses of the term
Any ongoing activity involving data collection on performance (usually internal, sometimes seen as self-evaluation)
18. Monitoring and Evaluation
Monitoring
Periodic, using data routinely gathered or readily obtainable, generally internal
Assumes appropriateness of programme, activities, objectives, indicators
Tracks progress against a small number of targets/indicators (one at a time)
Usually quantitative
Cannot indicate causality
Difficult to use for impact assessment
Evaluation
Generally episodic, often external
Can question the rationale and relevance of the program and its objectives
Can identify unintended as well as planned impacts and effects
Can address “how” and “why” questions
Can provide guidance for future directions
Can use data from different sources and from a wide variety of methods
19. How Monitoring and Evaluation can be complementary
Ongoing monitoring
Can identify questions, issues for (in-depth) evaluation
Provide data for evaluation
Nature of the intervention
Evaluation
Can identify what should be monitored in the future
20. Monitoring, Evaluation and Impact Evaluation
Inputs: investments (resources, staff…) and activities
Outputs: products
Outcomes: intermediate achievements of the project
Impact: long-term, sustainable changes
Monitoring: what has been invested, done and produced, and how are we progressing towards the achievement of the objectives?
Evaluation: what occurred and what has been achieved as a result of the project?
Impact evaluation: what long-term, sustainable changes have been produced (e.g. poverty reduction)?
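To make the three questions concrete, a results chain can carry an indicator and a target at each level, with monitoring as the routine actual-versus-target comparison and evaluation asking what that comparison cannot answer. A minimal sketch follows; the indicator names and figures are invented, not taken from any CRP.

```python
# Minimal results chain with indicators and targets (all values invented).

results_chain = {
    "inputs":   {"indicator": "budget disbursed (USD)",  "target": 500_000, "actual": 480_000},
    "outputs":  {"indicator": "farmers trained",         "target": 1_200,   "actual": 1_350},
    "outcomes": {"indicator": "farms adopting practice", "target": 600,     "actual": 410},
}

# Monitoring: routine comparison of actuals against targets, level by level.
for level, d in results_chain.items():
    pct = 100 * d["actual"] / d["target"]
    print(f"{level:8s} {d['indicator']:25s} {pct:5.1f}% of target")

# Evaluation then asks what this table cannot say: why adoption lags
# although training exceeded its target, what unintended effects occurred,
# and how much of the change is attributable to the project at all.
```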
21. Evaluation vs. audit
Audit
Compliance focus
Rules and procedures
Divergence: planned vs. actual
Main attention to process
Identify transgressions
Standardised approach
Outside scrutiny
Evaluation
Outcome orientation, context and rationale, attribution
Constructive guidance
“Why” and “how” as well as “what” considerations
Unintended as well as planned impacts and effects
Wide range of potential approaches and methods
23. What is an evaluability assessment (EA)?
Essentially, evidence-based plan for evaluation
What aspects of the programme are evaluable – and when?
E.g. coherent programme logic, data availability, conducive environment …
What the programme needs to do
Expected outcome trajectories
TOC that includes above considerations
24. Elements in an EA
(Involve stakeholders – build buy-in)
Review/clarify programme intent; identify varying perspectives
Help articulate the TOC; identify the soundness of the programme logic, including gaps
Identify evaluation priorities and questions
Identify evaluation implications for the program
Explore feasibility of addressing potential questions (data availability, cost, other considerations)
Explore alternative evaluation designs (a checklist sketch follows this list)
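These elements can be run as a simple gap checklist whose unmet items become the EA's recommendations. A minimal sketch, with invented criteria and answers:

```python
# Evaluability assessment as a gap checklist (criteria and answers invented).

ea_checklist = {
    "coherent programme logic / TOC articulated": True,
    "baseline and monitoring data available": False,
    "stakeholders agree on priority evaluation questions": True,
    "outcomes plausibly observable by the evaluation date": False,
}

ready = [item for item, ok in ea_checklist.items() if ok]
gaps  = [item for item, ok in ea_checklist.items() if not ok]

print("Evaluable now:      ", ready)
print("Address beforehand: ", gaps)
# The gaps, not a yes/no verdict, are the useful output of an EA: they say
# what the programme must put in place and when outcome evaluation is feasible.
```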
25. Outcome focus: what is this?
Change that follows from the intervention in some way
OECD/DAC: The likely or achieved short-term and medium-term effects of an intervention’s outputs
Can/should consider other factors/interventions
Consider the “whys”
26. Outcomes (vs. process, impact)
Level, what it is, and an example from farmer training:
Process: activities and outputs; what was done. Example: programme set up and implemented (as expected, or differently), needs assessment carried out, curriculum developed, outreach, training delivered.
Outcomes: changes following from the programme. Example: learning/expertise, confidence, planting practices, increased yields, new markets, increased revenues.
Impact: long-term effects following from the intervention, invariably in combination with other factors; the raison d’être. Example: sustainability of short-term gains; reduction of poverty, hunger and malnutrition; natural resources sustainability.
27. Questions for evaluation
Start with the questions
Choice of methods to follow
How to identify questions
Who can use evaluation information?
What information can be used? How?
Different stakeholders – different questions
Consider responses to hypothetical findings
Develop the theory of change
How many questions?
29. Three key evaluation questions
What’s happening?
(planned and unplanned, little or big at any level)
Why?
So what?
30. UNEG’s three evaluation questions
Are we doing the right thing?
Are we doing it right?
Are there better ways of achieving the results?
31. OECD/DAC Evaluation Criteria
Relevance
Effectiveness
Efficiency
Impact
Sustainability
• Evaluation criteria vs. evaluation questions
• Breadth vs. focus
• Intelligent vs. mechanical use
32. Some uses for evaluation
Programme improvement
Identify new policies, programme directions, strategies
Programme formation
Decision making at all levels
Accountability
Learning
Identification of needs
Advocacy
Instilling evaluative/questioning culture
33. Some priorities for an EA
Focus on outcomes
Identify expected/potential outcomes
Be open to unintended outcomes
Outcome trajectories
Evaluation priorities and questions
Surface and question assumptions
Implicit and explicit
Be realistic (priorities, expectations of the programme and the evaluation)
Don’t set up the programme for failure
35. Theory of Change
Why a useful tool for planning an evaluation
Alternative terms (intervention logic, logic model, results chain …)
Linear vs. models that reflect complexity (see the sketch below)
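One way to hold a theory of change that goes beyond a linear chain is to attach a testable assumption to every causal link. The sketch below is illustrative scaffolding; the links and assumptions are invented, not a model of any actual CRP.

```python
# A theory of change as causal links, each carrying a testable assumption.
# All content is invented, for illustration only.

theory_of_change = [
    # (cause, effect, assumption behind the arrow)
    ("training delivered", "practices adopted",
     "farmers have the inputs and tenure security needed to change practice"),
    ("practices adopted", "yields increase",
     "rainfall stays within the range the new practice was designed for"),
    ("yields increase", "household income rises",
     "market prices do not collapse when local supply grows"),
]

# An evaluation can walk the chain and ask, for every link, whether the
# assumption held, what evidence exists, and where the chain broke.
for cause, effect, assumption in theory_of_change:
    print(f"{cause} -> {effect}\n   assumption to test: {assumption}")
```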
39. Generic logic model – in context
[Figure: generic logic model in context. A chain from Inputs → Activities → Outputs → Intermediate results (1) → Intermediate results (2) → Impacts, set against Needs, Knowledge and the wider environment and context, with other factors, other interventions and other results bearing on each link.]
40. IMPACT ON CHILDREN
[Figure: IPEC/partner initiatives, targeted interventions and capacity building act on children, on families and communities, and on the enabling environment (institutions, policies & programmes, legislation, awareness, mobilization…).]
41. Outline of factors affecting maternal and child health and nutrition
Fig. from Victora, Cesar G., Robert E. Black, J. Ties Boerma, and Jennifer Bryce (2010). Measuring impact in the Millennium Development Goal era and beyond: a new approach to large-scale effectiveness evaluation. The Lancet, published online July 9, 2010.
45. Alternative models of causality (All recognised in the physical and social sciences)
Successionist (factual) causality
Counterfactual logic
All but one possible explanation ruled out
Generative (physical) causality
Focus on underlying processes, the “signature”
Simultaneous or alternative causal strands
“INUS” conditions: an insufficient but necessary part of a condition that is itself unnecessary but sufficient (formalised after this list)
Non-linear (e.g. “tipping point”) causality
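The INUS idea (Mackie's term) can be written out schematically. In the rendering below the letters are placeholders, not components of any actual causal package.

```latex
% A is an INUS condition for E: an Insufficient but Necessary part of a
% conjunction that is itself an Unnecessary but Sufficient condition.
E \iff (A \land B) \lor (C \land D)
% - A alone is insufficient for E;
% - A is necessary within the conjunct (A \land B);
% - (A \land B) is sufficient for E;
% - (A \land B) is unnecessary, because (C \land D) also produces E.
```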
46. Some considerations in choice of design (and methods)
Addresses, somehow, priority questions
Simplest approach – at needed confidence
Internal/external validity
Face validity, construct validity
Gets at “the whys” as well as “the whats”
Engages stakeholders, partners
Practicality (resources, time, data …)
47. Determining attribution – some alternatives
Experimental/quasi-experimental designs (counterfactual, randomisation)
Eliminate rival plausible hypotheses
Generative (physical) causality, INUS, non-linear (“tipping point”)
Theory of change approach
“Reasonable attribution”
“Contribution” vs. “cause”
48. Eliminate rival plausible hypotheses (Donald T. Campbell)
Identify plausible alternative explanations
Plausible to multiple stakeholders
Anticipate possible questions of sceptics
Consider threats to both internal and external validity
Use the simplest means possible to rule out likelihood of alternative explanations
49. Contribution Analysis (Mayne: Using performance measures sensibly)
1. Develop the results chain
2. Assess the existing evidence on results
3. Assess the alternative explanations
4. Assemble the performance story
5. Seek out additional evidence
6. Revise and strengthen the performance story
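Mayne's steps are iterative: seeking evidence (step 5) and revising the story (step 6) repeat until the performance story is credible enough for its intended use. The sketch below renders that loop as a workflow; the data structure and the crude credibility test are invented scaffolding, not Mayne's specification.

```python
# Contribution analysis as an iterative workflow (after Mayne).
# The structure and stopping rule are invented scaffolding.

story = {
    "results_chain": ["training", "adoption", "yield gains"],      # step 1
    "evidence": ["monitoring data", "field interviews"],           # step 2
    "alternative_explanations": ["good rainfall", "price rise"],   # step 3
}

def story_is_credible(s: dict) -> bool:
    """Crude stand-in for stakeholder judgement of the performance story."""
    return len(s["evidence"]) > len(s["alternative_explanations"])

# Step 4: assemble the performance story; steps 5-6: iterate until credible.
while not story_is_credible(story):
    story["evidence"].append("additional evidence sought")         # step 5
    if story["alternative_explanations"]:                          # step 6
        story["alternative_explanations"].pop()  # a rival explanation ruled out

print("Performance story supported by:", story["evidence"])
```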
50. Further considerations for meaningful outcome evaluation
Need information about inputs and activities as well as about outcomes
Check, don’t assume, that what is mandated in (Western) capitals is what actually takes place on the ground
Check: are data sources really accurate?
Dealing with responsiveness – a problem or a strength?
(Internal vs. external validity)
51. Some alternative approaches
Theory based
Realist evaluation
Most Significant Change, Success Case Method, Appreciative Inquiry
Participative
Outcome mapping/harvesting
Anthropological
Etc. etc. etc.
52. To bear in mind
“For every complex question, there is a simple answer – and it is wrong.” – H.L. Mencken
“One cannot succeed on visible figures alone… The most important figures that one needs for management are unknown or unknowable.” – W. Edwards Deming
“Not everything that can be counted counts, and not everything that counts can be counted.” – Einstein
53. And …
“Assessment of many of the most common activities in government requires soft judgment… Measurement often misses the point, sometimes causing awful distortions.” – Mintzberg
“Better an approximate answer to the right question than an exact answer to the wrong question that can always be made precise.” – Tukey
54. Methods for data gathering: possible options
Surveys
Panel studies/longitudinal
(experimental/quasi-experimental)
Interviews, group interviews
Documentation, analysis of records
Observation (quantitative, qualitative)
Community members as researchers
Alternative methods
Multiple methods
55. Making evaluation useful - 1
Be strategic
E.g. start with the big picture – identify questions arising
Focus on priority questions and information requirements
Consider needs, preferences, of key evaluation users
Don’t be limited to stated/intended effects
Be realistic, don’t set programs up for failure
Don’t try to do everything in one evaluation
56. Making evaluation useful - 2
Primary focus: how evaluation can be relevant and useful
Bear the beneficiaries in mind
Take into account diversity, including differing world views, logics, and values
Be an (appropriate) advocate
Don’t be too broad
Don’t be too narrow
57. How else can one practice evaluation so that it is useful?
Follow the Golden Rule
“There are no golden rules.” (European Commission)
Art as much as science
Be future oriented – focused on use
Involve stakeholders
Use multiple and complementary methods, qualitative and quantitative
Recognize differences between monitoring and evaluation
58. Conclusion
Primary focus: helping to make a difference (think strategically!)
Requires focus of some form on outcomes
What happens when, why, and so what
Use evaluation to embrace complexity – as simply as possible
Questions are more important than the “right” method
Thank you / grazie / merci / gracias
Burt Perrin
Burt@BurtPerrin.com