This document provides an overview of how small to medium-sized foundations plan for and approach monitoring, learning, and evaluation (MEL). It finds that foundations are dedicating more resources to evaluation than in the past, with typical spending ranging from 0.7% to 7.5% of program budgets, and that foundations typically have 3-4 full-time staff focused on MEL functions. The document outlines best practices foundations have identified, including establishing clear MEL principles, basing evaluations on testing strategies or hypotheses, planning evaluations early, streamlining indicators, using third-party evaluators, reviewing staff skills, and ensuring findings are used for learning and adaptation.
Planning for Monitoring, Learning, and Evaluation
at Small- to Medium-sized Foundations
A Review
Produced for the Oak Foundation
By Cascadia Consulting Group, Inc.
July 2016
CONTENTS
Executive Summary 1
Formalizing Monitoring, Evaluation and Learning Plans and Practices 3
    Principles
Monitoring 4
    Metrics / Indicators: What data to collect?
    Data collection
Evaluation 6
    Funding dedicated to evaluation
    Staffing for evaluation functions
    Number and frequency of evaluations
    Evaluating sub-granting organizations
Learning 10
    Using data for adaptive management
    Capturing and sharing lessons
Conclusion 12
References 13
EXECUTIVE SUMMARY
This report is based on findings from desktop research and interviews with selected foundations conducted between April and June 2016. It was developed to give the Oak Foundation a sense of how other foundations are tackling monitoring, evaluation, and learning (MEL) questions, and to show a range of options for Oak to consider as it develops its own MEL Plan. This summary of findings was developed for public distribution, anticipating that it may be useful for other donors.
Key trends that emerged from the interviews and desktop research included the following:

1. Foundations are spending more resources and putting more staff time into evaluation than they did in the past. Staff at smaller foundations tend to spend more time on individual grant evaluations, while staff at larger foundations tend to spend more time on assessments of broad program areas and on learning processes. While many foundations do not have consistent systems for tracking evaluation spending, some are deciding it would be useful to capture that information more methodically.

2. Less attention has been paid to learning to date, but recognition of the importance of purposeful learning is growing quickly. Many foundations are hoping to improve their learning processes, but are finding that it is not easy. It often requires an internal cultural shift and testing a variety of approaches. In contrast, foundations tend to have fairly clear processes and standards for monitoring and evaluation. Foundations that do have explicit learning efforts remain more focused on internal learning than on communicating and sharing lessons externally. Foundations tend to be more transparent with external audiences about their grant-making processes, goals, and strategies, and less transparent about how they assess performance or their lessons learned. That said, both grantees and foundations are recognizing that sharing more lessons externally would be beneficial.

3. Foundations are exploring appropriate and useful ways to evaluate work done through sub-granting organizations. Some are focusing on building the internal monitoring and evaluation capacity of those organizations. It would be useful for donors to coordinate approaches to evaluate work done through sub-granting organizations, which can allow for pooled resources and avoid putting an extra burden on the sub-grantor.
Emerging best practices
1. Lay out a set of explicit principles to guide monitoring, evaluation, and learning practices across the foundation or across the program. Other foundations' guiding principles often emphasize the need to ensure that findings are actionable and integrated into ongoing decision-making. They are also likely to address the intended roles of grantees and third-party evaluators.

2. Base the evaluation framework on the concept of testing a strategy or hypothesis. This may also be called a theory of change or a rationale.

3. Plan out evaluations very early in strategy development. Early MEL planning helps with budgeting, ensuring that the right baseline information is collected, and clarifying assumptions and hypotheses that could be tested to facilitate adaptive management. At some foundations, Trustees or board members review the proposed evaluation plan before approving a program or grant investment.

4. Streamline indicators and monitoring efforts. Make sure that staff and grantees only measure things that are expected to directly apply to decision-making about strategy or future investments. Data collection can be time- and resource-intensive. More data is not necessarily better. Indicators should be strategically chosen.

5. Use third-party evaluators for most or all evaluations. Third-party evaluators provide additional capacity and are critical for ensuring objectivity. Having foundation staff engaged along the way is also important to provide data inputs and to make sure the evaluation will ultimately be useful to inform foundation decision-making.

6. Review in-house staff skills and consider building capacity through internal trainings or by forming an external advisory committee. External advisory committees can be permanent—to assist with all foundation or program evaluations—or they can be ad-hoc committees created for specific evaluations where additional expertise or peer review would be helpful. They advise on evaluation scopes and questions, and do not replace third-party evaluators who undertake the actual work of evaluation. Forming and managing an external advisory committee does take some resources and staff time.

7. Consider instituting new practices to ensure that data and evaluation findings are used for adaptive management. For example, think about setting aside regular reflection time (as part of existing meetings or special events), ensuring that Trustees communicate the importance of learning to the organization, incorporating related metrics into staff performance evaluations, and expanding the audience for evaluation findings by pulling out lessons that are broadly applicable across programs.

8. Involve foundation communications staff early in conversations about sharing findings and lessons externally. Communications staff have a key role to play. Monitoring, evaluation, and learning work does not have to fall only to program officers or dedicated M&E staff.
5. Planning for Monitoring, Learning, and Evaluation at Small- to Medium-sized Foundations
3
Principles
Some foundations have laid out explicit principles
that guide their monitoring, evaluation, and learning
approaches. The principles often address questions like
the following:
• What is the motivation for pursuing evaluation and
learning? Is the foundation evaluating for proof/
accountability or for learning/program improvement,
or both?[6]
• How are monitoring, evaluation, and learning efforts
integrated into strategy design or grant-making
decisions?
• How should grantees be involved? Who else needs to
be involved? Is it important to minimize the burden on
partners, staff, or grantees?
• Does the foundation feel it is critical to use third party
evaluators?
• How important is it to share findings with external
audiences?
Foundations have documented their monitoring,
evaluation, and learning practices and policies
to varying degrees. Some of the documents are
intended for purely internal use, while others help
communicate policies and priorities to grantees and other
external audiences.
For example:
• The David and Lucile Packard Foundation used to
have a 137-page Standards document on strategy
development and M&E, and now has a 4-page guidance
document. Program officers are encouraged to create
a plan that is right-sized for the project and that works
for them.[1]
• The Children’s Investment Fund Foundation (CIFF)
distributed a “Monitoring and Evaluation Principles
and Practices for Partners” document that has a
checklist of quality control measures.[2]
• The W.K. Kellogg Foundation has an Evaluation
Handbook intended to encourage and aid grantees in
conducting their own evaluations.[3]
• The William and Flora Hewlett Foundation has an
Evaluation Principles and Practices document that
aims to make evaluation practices more consistent,
clarify staff roles and available support, and accelerate
the design of meaningful evaluations.[4]
The Hewlett Foundation also includes language about
evaluation in agreements with grantees so that they are
aware that the foundation may choose to commission
an evaluation that examines the work undertaken with
grant funds. Typically, these evaluations include multiple
grantees working toward similar goals. Then, if Hewlett
does plan an evaluation that includes the grant, the
foundation communicates the proposed evaluation
questions and approach to the grantee in greater detail,
along with any plans to share the findings from that
evaluation so that others may learn from the foundation’s
successes and failures.[5]
MONITORING
Metrics / Indicators: What data to collect?
Several foundations are taking steps to avoid over-
measuring and to ensure that indicators are carefully
selected based on their anticipated direct usefulness for
learning or applicability to decision-making.
Packard, for example, emphasizes that staff shouldn’t
try to measure everything, but rather focus on areas
where assumptions are uncertain, there are doubts about
strategy, or there are big cost differences between potential
strategies.[6]
CIFF is trying to take a hard cost-benefit
approach to monitoring (and evaluation)—if program
officers don’t know who will use the information, there’s
no reason to monitor it.[7]
CIFF grants used to have dozens
of indicators each, but they are now focusing on fewer,
more meaningful indicators, using the theory of change to
guide which indicators are most meaningful.[7]
One level up from individual grants, the Robert Wood
Johnson Foundation asks each grant-making team to
identify three strategic objectives, and no more than three
measures for each objective.[8]
The Wallace Foundation used to have a comprehensive
scorecard, but they found that it actually provided too
much detail to be clear or actionable.[9]
They decided to be
more selective in terms of topics covered and data selected,
and they began to display progress against targets using
speedometer-like gauges and short summaries of key
findings.[9]
The Nature Conservancy’s Africa region has a more
extensive list of indicators (over 100) because they want to
be able to do impact evaluations and capture unexpected
impacts.[10]
They have found that the additional staff time
and cost associated with collecting extra data—beyond
what may be required for a performance evaluation
requested by a funder—is minimal, and can pay off if it
makes an impact evaluation possible.[10]
Data collection
A sample of data collection methods is listed in the text
box. It is important to consciously consider how the
foundation will use the data collected to make decisions,
and eliminate data collection activities or grant report
questions that will take time without resulting in directly
useful information.
Foundations use a range of methods to collect
data. These include:
• Surveys to measure attitude change.[11]
• Individual or group interviews.
• Content analysis of media publications,
ordinances, or legislation.[11]
• Site visits or phone calls.
• Observation.
• Written questionnaires.
• Knowledge or achievement tests; pre- and
post-tests.[3][11]
• Focus groups with key informants who
have directly observed changes in
community attitudes or behaviors.[11]
• Periodic feedback forums facilitated by a
neutral party where project participants
provide feedback on activities.[44]
• Technologies like DHIS2 and Magpi. Data collectors can use mobile phones, online or offline, to collect data in the field and then send it to the foundation (see the sketch after this box).
Some of these methods may lend themselves
better to ongoing monitoring, and others to
informing specific evaluations.
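To make the mobile-to-foundation data flow more concrete, the following is a minimal sketch, in Python, of how a field-collected value could be submitted to a DHIS2 instance through its dataValueSets Web API endpoint. The instance URL, the credentials, and the dataSet, orgUnit, and dataElement identifiers are all placeholders rather than real values; Magpi and similar tools expose comparable web APIs, so the same pattern applies.

```python
# Hedged sketch: pushing a field-collected value to a DHIS2 instance via its
# Web API. The instance URL, credentials, and UIDs below are placeholders.
import requests

DHIS2_URL = "https://dhis2.example.org"   # hypothetical DHIS2 instance
AUTH = ("mel_officer", "change-me")       # placeholder credentials

payload = {
    "dataSet": "dataSetUID",              # placeholder data set UID
    "period": "202401",                   # DHIS2 monthly period: January 2024
    "orgUnit": "orgUnitUID",              # placeholder organisation unit UID
    "dataValues": [
        {"dataElement": "dataElementUID", "value": "42"},
    ],
}

# POST the data value set; DHIS2 responds with an import summary describing
# how many values were imported, updated, or ignored.
response = requests.post(f"{DHIS2_URL}/api/dataValueSets", json=payload, auth=AUTH)
response.raise_for_status()
print(response.json())
```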
It could be practical to collect data at different times for
different reasons, including convenience, the presence of
key staff and grantees, or optimal timing to inform specific
decisions. For example, data collection could be timed to
occur:
• At the conclusion of a grant year.
• Biannually, annually, quarterly, or monthly.
• During events or meetings.
• During key events and critical moments.[11]
• Prior to known reporting and planning times (e.g.,
start of annual planning and budgeting process).[12]
• During moments that are important for reflecting on
and refining a strategy (e.g., an election, the end of a
pilot project, or a juncture in an experimental part of
the strategy).[12]
The David and Lucile Packard Foundation notes that it may
be helpful to create a timeline over the first couple of years
of a strategy in order to align activities, internal reporting
timelines, and data collection timelines.[12]
Dashboards
Some foundations use dashboards to house collected data
and make it accessible for staff. Dashboards may include
data on:
• Internal operations, with metrics to track efficiency in
grant-making.
• Program spending (what has been allocated versus what was budgeted; see the short sketch below).[13]
• Grant highlights and indicators of program impact.
One of the critiques of using dashboards for tracking
program impact is that the format encourages
oversimplification.[13]
On the other hand, it can be a good
visual way to provide information to board members or
trustees, and it can show impacts at a glance rather than
using lots of text.
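As a simple illustration of the allocated-versus-budgeted view mentioned in the list above, the short sketch below computes the percentage of budget allocated per program, the kind of figure a spending dashboard might display at a glance. The program names and dollar amounts are entirely hypothetical.

```python
# Hedged sketch: allocated-versus-budgeted figures for a simple program-spending
# dashboard. Program names and dollar amounts are entirely hypothetical.
programs = {
    "Education":   {"budgeted": 2_000_000, "allocated": 1_650_000},
    "Environment": {"budgeted": 1_200_000, "allocated": 1_260_000},
}

for name, figures in programs.items():
    pct_allocated = 100 * figures["allocated"] / figures["budgeted"]
    # One dashboard row: program, budget, allocation, and percent of budget allocated.
    print(f"{name:<12} budgeted ${figures['budgeted']:,}  "
          f"allocated ${figures['allocated']:,}  ({pct_allocated:.0f}% of budget)")
```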
The European Climate Foundation launched an online
platform for Planning, Assessment and monitoring,
Reporting and Learning (PARL) in 2014. One goal was to
harmonize approaches and language used for planning
and monitoring by different teams.[14]
Among other
things, PARL captures indicators, scores, progress, and
key lessons. ECF found that to make the platform useful,
it was important to invest time in ensuring the consistency of information inputs and the quality of indicators and progress statements.[14]
Other foundations that have dashboards include the
Robert Wood Johnson Foundation, Charles and Helen
Schwab Foundation, Lumina Foundation, James Irvine
Foundation, and Marguerite Casey Foundation.[15]
EVALUATION
The reason for an evaluation informs the
methodology, timing, and spending. As the Annie
E. Casey Foundation has said, “foundations may
implement evaluation to monitor grantee performance, to
inform strategy development and improvement, to build
knowledge across a given field, to build capacity to address
particular issues, to strengthen and expand support for
a policy or social change goal, or a combination of these.
All of these decisions will shape a foundation’s evaluation
practice.”[11]
Funding dedicated to evaluation
The trend is for foundations to spend more on evaluation
than they have in the past. In a 2013 benchmarking report,
50 percent of the 31 foundations that were surveyed said
that their evaluation investments had increased during
the previous two years relative to grant-making, and 30
percent said their evaluation investments had stayed the
same.[13]
See Table 1 for data on how much some foundations
spend on evaluation. Note that most foundations lack
consistent systems for tracking evaluation spending, so
benchmarking data isn’t always perfectly accurate.
Most funding data relates specifically to evaluation, not
to monitoring or learning. However, in planning an annual
budget, it can be helpful to include costs associated with
learning processes (e.g., retreats and communications) as
well as evaluations.[19]
Other expenses that are often not
included in these figures are those associated with building
the capacity of grantees to generate and use monitoring
data.[7]
Conventional Wisdom (source: [16])
• 5-10% of programmatic budget.

Average (sources: [17], [16], [18], [19], [1])
• 3.7% of programmatic dollars (2010).
• Larger foundations spend a smaller percentage of their budgets on evaluation because the costs don't rise proportionally with program costs.
• 0.7-7.5% of program spending (2014).
• Median spending on formal evaluation is 2% of a grant-making budget.
• Many foundations spend less than 1%.

Irvine (source: [20])
• 5-12% of program costs.

Kellogg (source: [3])
• 5-7% of a project's total budget.

CIFF (source: [7])
• 6% on third-party evaluations (3% in the climate program and 10% in other programs, because of differences in evaluation types). There is no set rule; spending reflects CIFF's "fit-for-purpose" approach.

Hewlett (sources: [17], [21])
• 0.7-1.2% of programmatic dollars between 2011 and 2014 (the foundation can also spend administrative funds).
• Aiming to increase to 2% and to improve systems for tracking evaluation expenditures.

Table 1. Percentages of foundation program budgets spent on evaluation.
Staffing for evaluation functions
As with budgetary resources, the average number of staff
dedicated to evaluation has also tended to increase in
recent years, particularly for medium-sized and large
foundations. Foundations had an average of 3 full-time
employees for monitoring and evaluation in 2009; this
increased to 4.2 in 2012.[13]
The majority (three-quarters) of
foundations that responded to a study by The Foundation
Review had at least one full-time employee dedicated
to evaluation-related activities.[13]
See Table 2 for more
information on how foundations are staffing MEL efforts.
For supporting learning, relevant responsibilities could be
carried out by human resources, communications, or IT
staff, rather than a dedicated learning officer.[23]
Evaluation advisory committees
Some foundations have external evaluation advisory
committees. As of early 2012, this included the Annie E.
Casey Foundation, the Rockefeller Foundation, and the
Skillman Foundation.[24]
Typically, these committees meet
1-4 times per year and have 4-8 members; committee
members are compensated for their time.[24]
Some
committees advise on evaluation across the foundation
on an ongoing basis, and others are ad-hoc committees
focused on specific initiatives. Most committees are
organized and run by foundation staff, while some are
managed by consultants.[24]
Hewlett has used an evaluation advisory committee on
some of its evaluations and found it to be very helpful. An
added benefit was that the committee included people
who might provide follow-on funding for the grantees that
were being evaluated.[5]
Evaluation advisory committees
can be useful as a sounding board, and for providing peer
review of the evaluation design and product—especially
for foundations with few internal M&E staff—and for
filling knowledge gaps in specific content areas.[24]
These
committees can help build the credibility of evaluation
findings and boost foundation confidence.[24]
On the other
hand, committees increase expenses and require staff time
to attend meetings.
Average (sources: [22], [16])
• 5.3% of full-time equivalents (FTEs) for smaller (<$50M) foundations, which also had the greatest variation (0.8% to 13.8%).
• 5.7% of FTEs for medium-sized ($50-200M) foundations.
• 4.2% of FTEs for large (>$200M) foundations, where the number of M&E staff grew from 5 to 10 FTEs between 2010 and 2012.

CIFF (source: [7])
• CIFF has embedded Evidence, Measurement and Evaluation (EME) staff in each team. The embedded model helps ensure that evidence is incorporated into the investment design and that evaluation is incorporated through the program lifecycle. EME staff work collaboratively but also have ways to protect independence: for example, final decisions on what to evaluate rest with those EME staff and the EME director.

Hewlett (sources: [17], [4])
• Hired an evaluation officer in 2013 to provide technical assistance to programs. Each program is still responsible for commissioning its own evaluations; they may decide to make each program officer responsible for their own evaluations, or to designate a team member to lead evaluation efforts.
• Program officers spend 5-20% of their time designing and managing evaluations and deciding how to use the results. They are expected to be managing one significant evaluation at any given time.

Robert Wood Johnson (source: [16])
• 23 of the ~300 staff (over 7%) are in the Research, Evaluation, and Learning Department; those staff members spend 70% of their time on centralized M&E work and 30% on program-specific M&E activities.

Table 2. Sample foundation staffing patterns for M&E.
Number and frequency of evaluations
There seems to be agreement in the field that foundations
don’t have to evaluate everything. One reason is that
some things have already been evaluated by another
donor or organization.[7]
Another reason is that evaluating
everything can be an unnecessary burden for foundation
staff and grantees. With limited resources, many also
believe it is better to do a few in-depth, high-quality
evaluations instead of a large quantity of evaluations.[13]
Hewlett applies criteria to make decisions about where to
prioritize the use of evaluation funds; these criteria include
opportunities for learning, any urgency to make course
corrections or future funding decisions, the potential for
strategic or reputational risk, size of investment (as proxy
for importance), and the expectation of a positive return on dollars invested in an evaluation.[4]
Packard
suggests asking whether the foundation will really use
all of what is in the evaluation plan, and how they will use
collected data to make concrete decisions.[6]
Another trend is trying to routinely plan for evaluations
from the initiation of a new grant or strategy, in order to
budget sufficient resources, collect necessary baseline
data, and develop thoughtful evaluation questions that
relate to the original theory of change. Table 3 summarizes
practices from four foundations.
Many of the foundations studied emphasize the importance
of third-party evaluations. Rockefeller, for example, states
that “third-party evaluations tend to be clearer, more
accurate, and more revealing than those conducted by
untrained staff” while also acknowledging that they can be
expensive. Table 4 summarizes views from selected other
foundations.
Foundation Standard Practice Source
Packard • Develops a draft evaluation plan with associated costs while the strategy is being designed.
Program officers decide at the outset where in the life of the strategy they will likely need to
dig deeper.
[6] [1]
CIFF • Asks the sector team for an investment memo that includes an evidence review
(sustainability, likelihood of success) completed by EME staff based on the program's theory
of change. They also define key evaluation questions based on who needs to know what and
when. If it is a new program, the evaluation plan may change later on. A grant that is testing
something may warrant a more rigorous impact evaluation, whereas other evaluations might
focus on process and learning.
[7]
Hewlett • Strategically chooses what to evaluate and what not to evaluate.
• Teams plan for evaluation as strategies are developed, in order to clarify what success will
look like and ensure that good baseline data can be collected.
• Most evaluations look at a strategy rather than a single grant, are generally timed for the mid-
point and conclusion, and are intended to generate lessons useful for multiple stakeholders
both inside and outside the Foundation. Across the life of a strategy, there are annual progress
reports and every-other-year formal grant evaluations to inform possible course corrections.
Outside reviewers evaluate overall progress at the end of the seven years covered by the
strategic plan.
[5] [4]
[25] [17]
Babcock • Operatesonaten-yearplanninghorizon.Projectsandportfolioshavebeensubjecttoaformal,
rigorous mid-course (five-year) review, which involves adding up results and determining
what has been learned about the strategy and what needs to be tweaked in the approach.
These longer cycles have been useful for evaluating and adapting overall strategies; however,
they also use shorter-term learning cycles to adapt work with individual grantees.
[26] [27]
Table 3. Sample foundation practices for the timing and number of evaluations.
Evaluating sub-granting organizations
Evaluating sub-granting mechanisms can take several
angles:
• Evaluating the impact of the sub-grants.
• Evaluating the added value of the intermediary.[4]
• Evaluating how the intermediary’s performance
compares to that of other intermediaries.[4]
Hewlett and CIFF have both been thinking about
approaches to evaluating sub-granting or re-granting
organizations. Hewlett considers that “because we
are delegating to these intermediaries what might be
considered our stewardship role, we have an even greater
responsibility to evaluate their efforts.”[4]
Foundations can work in partnership with the sub-
granting organizations to conduct the evaluations. For
example, for one evaluation that involved several sub-
granting organizations, Hewlett sent a proposed plan
(with questions, intended audience, and timeline) to the
intermediaries to provide feedback.[5]
Hewlett issued the Request for Proposals (RFP) to hire an evaluator, and then
asked the evaluator to work with the intermediaries and
provide tailored reports for each in addition to the public
report.[5]
Another complementary route is to help support the
development of M&E systems within sub-granting
organizations.[7]
CIFF, for example, prioritizes helping ECF be
able to report and use data and evidence themselves (e.g.,
for reporting to their own Board or senior management)
and secondarily to report to donors like CIFF.[7]
Even when
intermediaries have strong internal M&E capabilities,
some evaluations should still be managed by the donors
and undertaken by external evaluators, depending on the
scope and focus.[4]
When multiple funders support the same sub-granting
organization, it is useful for the funders to coordinate
efforts to evaluate—and/or build the evaluation capacity—
of that organization and its sub-grantees.[7]
That can
reduce the burden on the sub-granting organization and
make more efficient use of donors’ evaluation resources.
Foundation Perspectives on third party evaluations Source
St. David's
Foundation
• Uses third party evaluators and provides guidance on selecting evaluators. [28]
Cargill • Uses consultants for an independent third party view and extra capacity. The M&E
team meets with the consultants weekly and the interaction is highly collaborative.
[16]
Walton • Evaluations use publicly available data and in-house capacity where possible and
appropriate, but some do require commissioned research or external evaluators.
[29]
CIFF • Over 80% of CIFF investments are independently evaluated.
• Where possible, CIFF opens these external evaluations up to competition.
[2]
[30]
Hewlett • Defines evaluation to mean specifically third party evaluation. When they
commission an evaluation it’s because they want third party feedback.
• In contrast, monitoring activities are typically done internally.
[5]
[4]
Babcock • An outside professional consultant was used for the 10-year assessment. Data is
provided by foundation staff.
[26]
Table 4. Use of third-party evaluations by a sample of foundations.
LEARNING
Using data for adaptive management
Learning efforts have multiple goals in the context of
foundations. They can include understanding progress,
identifying problems, being able to make adjustments
in a timely manner, and making increasingly well-
informed investments in future grant cycles. Foundations
have typically spent less time on learning compared
to monitoring and evaluation. In a 2013 benchmarking
study, only large foundations said more than 10 percent
of evaluation staff time was spent on learning activities.[13]
It can take a cultural shift and high-level leadership to make
learning more of a focus. One source suggested holding
discussions with both staff and board members about how
to strengthen learning practices so that they improve the
work of the organization and its grantees.[23]
It is also rare
for managers to consider the effective use of evaluation
findings when assessing staff performance.[13]
A 2012 study
by the Center for Evaluation Innovation found that the
biggest challenges that program staff face in effectively
using evaluations to inform their work are limited time
and heavy workloads (67%), timeliness of data (47%),
and the culture/attitude about evaluation (31%).[22]
Other
challenges mentioned included cost, limited capacity
for data and evaluation use, differences in capacity and
interest among staff, and lack of clarity on strategies,
outcomes, or indicators.[22]
To address the barriers that impede using data for
adaptive management, foundations are trying out process
improvements such as:
• Setting aside regular reflection time.
• Using evaluation approaches and writing scopes to
ensure that data is returned quickly.[13]
• Building staff members’ evaluation capacity.
• Explicitly and consistently integrating data collection
and analysis as a core, ongoing part of program
design and implementation.[3]
• Ensuring that those who will be in a position
to use the evaluation results are involved early
in the process of planning and undertaking the
evaluation.[31]
Capturing and sharing lessons
Generating useful learning for adaptive management
requires a thoughtful approach to both capturing findings
and effectively sharing those findings with a range of
audiences.
In capturing lessons, it is important to make sure that
grantees feel comfortable communicating their mistakes
or perceived failures without fearing loss of the grant.[32]
The Skillman Foundation addresses this by sitting down
with grantees to review data that has been collected and
talk about where there seems to be progress and where
there doesn’t, and then to make action commitments that
could improve outcomes.[33]
Sample methods for capturing lessons:
• Develop specific questions to make
learning a focus during site visits.[33]
• Create a password-protected website for
sharing documents and data.[33]
• Create a community discussion board for
posting questions and insights.[33]
• Use consultants to conduct interviews
and focus groups and prepare quarterly
learning memos for discussion at staff
meetings.[33]
• Collect feedback from intended
beneficiaries through surveys, focus
groups, or workshops.[18]
• Identify the grantees with which the
foundation has a particularly trusting
relationship, and test learning approaches
with them.[33]
Internal sharing
Table 5 lists a sample of approaches used by five foundations to share lessons internally.
Foundation Standard Practice Source
Packard • Program teams work together on a holistic review of the last 12 months.
• For a 2014 Strategy and Learning Week, staff designed sessions to share lessons, discuss
emerging questions, and talk about cross-cutting issues.
• Some teams have quarterly meetings to share learnings; these meetings include partners
and consultants.
[34] [12] [1]
CIFF • May start doing an annual evaluation report for the Board, with key findings across the
portfolio.
• One day every quarter is dedicated for senior leaders to review the portfolio and discuss
performance and lessons learned, with a focus on investments with updated evaluations
or those facing decision milestones.
[7] [2]
Hewlett • Holds six in-town weeks per year with two days focused on cross-program learning,
where staff dig into issue areas; sessions sometimes also include external speakers
and grantees. Program and administrative department representatives go on yearlong
rotations to help develop the themes with the organizational learning officer. The Hewlett
president emphasizes how important it is for all staff to attend.
• Speakers (grantees, other external speakers, or program and administrative staff)
come in once or twice a month at lunchtime to talk about an issue area or provide an
update on a strategy. Presentations are posted on the intranet.
• Has considered setting up a cross-foundation Evaluation Community of Practice (with
rotating or standing members).
[5] [4]
California
Wellness
• Sponsors an annual learning and evaluation conference for all organizations with active
grants from the foundation.
[33]
Babcock • Every board meeting includes a learning session on a specific topic; grantees are often
invited to participate.
[27]
Table 5. Sample mechanisms used to share lessons internally within foundations.
External sharing
Sources listed many reasons to share project information
and evaluation results externally, including attracting
further support for follow-on work and helping to improve
the performance of those projects and organizations.[3]
Still, many foundations remain more focused on internal
learning than external communications. Only 38 percent
of foundations surveyed for a Grantmakers for Effective
Organizations report cited external purposes as being
“very important” in their formal evaluations; this rate had
gone up over time among the smallest organizations but
was unchanged among medium-sized and large ones.[35]
A 2016 report from the Center for Effective Philanthropy
indicated that foundations tend to be more transparent
about their grant-making processes and their goals and
strategies, and less transparent about how they assess
performance or their lessons learned, even though they
think it would be beneficial to do so.[36]
Only 5 percent of
foundations surveyed share lessons they have learned
from projects that have not succeeded.[36]
Some of the
reasons cited for having limited transparency are limited
staff time, Board caution about sharing information,
concerns about information being misunderstood, a
fear of putting grantees at risk, or concerns that sharing
honest information about program challenges could hurt
grantees’ ability to get funds from other donors.[36]
When trying to increase transparency, confidentiality
considerations remain critical; for example, findings may
be sensitive if grantees are working on issues that are
not aligned with government policy in their countries.
Hewlett has a policy of sharing evaluation results so that
others may learn, but making principled exceptions on a
case-by-case basis.[4]
Similarly, CIFF has a “do no harm”
approach that takes precedence over transparency of
evaluation findings—particularly for advocacy programs—
but otherwise makes an effort to share evaluation results
and data widely.[7]
If there are sensitivities, they may only
publish parts of the report, or do a separate external-facing
piece.[7]
Some question the effectiveness of sharing lessons
through a foundation website. The Center for Effective
Philanthropy notes that “statistical analyses show that
providing more information on foundation websites does
not correlate with grantees’ perceptions of their funders’
level of transparency.”[36]
Hewlett puts some of their
evaluations on their website but agrees that it may not be
the ideal way of fully “sharing”; they are looking back at
past evaluations to see what was shared, when, and how.[5]
Early findings can be shared with a smaller group of
advisors, stakeholders, or foundation communications
staff to brainstorm ways to share the evaluation results
more broadly.
Ways to share lessons internally:
• Devote time at staff meetings to reflect on evaluation
topics.
• Have team retreats that focus on learning.
• Build and use a knowledge management system or
dashboard.
• Hold facilitated strategic learning debriefs.[37]
• Have weekly discussions with the third party
evaluation team.[3]
• Schedule an internal debrief at the end of each
evaluation.[4]
• Host brown bag discussions when grantees or experts
come to town.
• Do an annual evaluation report for the Board and program directors, with key findings from across the portfolio for the year, focusing on things that are broadly relevant and not program-specific.[7]
Ways to share lessons externally:
• Put learning topics on the agenda at funder
meetings or events.
• Host roundtable research discussions.
• Put information on the foundation website.
• Use social media.
• Host webinars.
• Publish newsletters or post videos.
• Do presentations (jointly between program
and evaluation teams) at conferences.
CONCLUSION
This review highlights a few key trends. First, foundations are spending
more resources and putting more staff time into evaluation than they
did in the past, and making more of an effort to systematically track
evaluation spending. Second, foundations are exploring appropriate and useful
ways to evaluate work done through sub-granting organizations, including by
partnering with those organizations to evaluate sub-grantees, or by helping
to build the internal M&E capacity of those intermediaries. Third, while less
attention has been put on learning to date, recognition of the importance of
purposeful learning is growing quickly. More advances have been made in
internal learning than in sharing lessons with external audiences.
The following are emerging best practices to support effective monitoring,
evaluation, and learning in foundations:
Planning for MEL
Lay out a set of explicit principles to guide monitoring, evaluation, and
learning practices across the foundation or across the program. Base
evaluation frameworks on the concept of testing a strategy or hypothesis, and
plan out evaluations very early in strategy development.
Monitoring
Streamline indicators and monitoring efforts, and ensure that all data that is
collected is collected for a clear purpose.
Evaluation
Review in-house staff skills needed for managing evaluation processes,
and consider building capacity through internal trainings or by forming
an external advisory committee. Use third-party evaluators for most or all
evaluations to help ensure objectivity.
Learning
Consider instituting new practices and procedures to ensure that data and
evaluation findings are consistently used for adaptive management. Involve
foundation communications staff early in conversations about sharing
findings and lessons externally.
REFERENCES
[1] M. B. Pearlman, Interviewee, Evaluation and Learning Manager, Packard Foundation. [Interview]. 26 April 2016.
[2] Children’s Investment Fund Foundation (CIFF), “Monitoring and Evaluation Principles and Practices - for partners,” 2015.
[3] W.K. Kellogg Foundation, “W.K. Kellogg Foundation Evaluation Handbook,” 2004.
[4] F. Twersky and K. Lindblom, “Evaluation Principles and Practices: An Internal Working Paper,” The William and Flora
Hewlett Foundation, 2012.
[5] A. Arbreton, Interviewee, Evaluation Officer, The William and Flora Hewlett Foundation. [Interview]. 17 May 2016.
[6] The David and Lucile Packard Foundation, “The Standards: A Guide for Subprogram Strategies,” 2010.
[7] M. Kennedy-Chouane, Interviewee, Evidence, Measurement, and Evaluation Manager, Children’s Investment Fund
Foundation (CIFF). [Interview]. 16 May 2016.
[8] K. B. P. Giudice, "Assessing Performance at the Robert Wood Johnson Foundation: A Case Study," The Center for Effective Philanthropy, Inc., 2004.
[9] M. C. DeVita, “How Are We Doing? One Foundation’s Efforts to Gauge its Effectiveness,” The Wallace Foundation, 2005.
[10] C. Leisher, Interviewee, Director of Monitoring and Evaluation, The Nature Conservancy. [Interview]. 25 May 2016.
[11] Organizational Research Services, “A Guide to Measuring Advocacy and Policy,” The Annie E. Casey Foundation, Baltimore,
MD, 2007.
[12] The David and Lucile Packard Foundation, “Guidance for Development of MEL Plans,” 2015.
[13] J. Coffman, T. Beer, P. Patrizi and E. H. Thompson, “Benchmarking Evaluation in Foundations: Do We Know What We Are
Doing?,” The Foundation Review, vol. 5, no. 2, pp. 36-51, 2013.
[14] E. O’Connor, “The European Climate Foundation’s monitoring and evaluation journey,” 17 February 2016. [Online].
Available: http://www.efc.be/uncategorized/european-climate-foundations-monitoring-evaluation-journey-shifting-
practice-monitoring-individual-projects-visualising-strategy-progress/. [Accessed 1 June 2016].
[15] K. Putnam, “Measuring Foundation Performance: Examples from the Field,” California Healthcare Foundation, 2004.
[16] Monitor Institute, “M&E Landscape,” The Gordon and Betty Moore Foundation, October 2015.
[17] F. Twersky and A. Arbreton, “Benchmarks for Spending on Evaluation,” The William and Flora Hewlett Foundation, Menlo
Park, CA, 2014.
[18] E. Buteau, Ph.D. and P. Buchanan, “The State of Foundation Performance Assessment: A Survey of Foundation CEOs,” The
Center for Effective Philanthropy, Inc., 2011.
[19] H. Preskill and K. Mack, “Building a Strategic Learning and Evaluation System for Your Organization,” FSG, 2013.
[20] "The James Irvine Foundation Evaluation Policies and Guidelines," The James Irvine Foundation.
[21] S. Parker, “The Shaping of Evaluation at the William and Flora Hewlett Foundation,” The Evaluation Roundtable, 2016.
[22] Center for Evaluation Innovation, “Evaluation in Foundations: 2012 Benchmarking Data,” 2012.
[23] Grantmakers for Effective Organizations, What is a Learning Organization?, 2014.
[24] M. Tuan, “External Evaluation Advisory Committee Scoping Project: Findings and Recommendations,” The David and
Lucile Packard Foundation, 2012.
[25] The William and Flora Hewlett Foundation, “Outcome Focused Grantmaking: A Hard-Headed Approach to Soft-Hearted
Goals,” 2012.
[26] M. Mountcastle, Interviewee, Board of Directors, Mary Reynolds Babcock Foundation. [Interview]. 19 May 2016.
[27] G. Williams, “Learning and Crafting Strategy at the Mary Reynolds Babcock Foundation,” March 2011. [Online]. Available:
http://mrbf.org/sites/default/files/docs/resources/learningandcraftingstrategy.pdf. [Accessed 6 June 2016].
[28] St. David’s Foundation, “Learning and Evaluation”.
[29] The Walton Family Foundation, “How to Construct Grant Performance Measure (Outputs and Outcomes): A Brief Guide for
Environmental Grant Applicants”.
[30] Children’s Investment Fund Foundation, “Open Evaluation Opportunities,” [Online]. Available: https://ciff.org/evaluation-
rfps/. [Accessed 1 June 2016].
[31] Council on Foundations, “35 Keys to Effective Evaluation,” [Online]. Available: http://www.cof.org/content/35-keys-
effective-evaluation. [Accessed 11 May 2016].
[32] Grantmakers for Effective Organizations, How Can We Embrace a Learning for Improvement Mindset?, 2014.
[33] Grantmakers for Effective Organizations, “Learning Together: Actionable Approaches for Grantmakers,” Washington, DC,
2015.
[34] Grantmakers for Effective Organizations, “Who is Having Success with Learning? The David and Lucile Packard
Foundation,” 15 May 2014. [Online]. Available: http://www.geofunders.org/resource-library/learn-for-improvement/
record/a066000000H2hYkAAJ. [Accessed 16 May 2016].
[35] “Evaluation in Philanthropy: Perspectives From The Field,” Grantmakers for Effective Organizations, 2009.
[36] E. Buteau, J. Glickman, M. Leiwant and C. Loh, “Sharing What Matters: Foundation Transparency,” The Center for Effective
Philanthropy, 2016.
[37] A. Williams, “Evaluation for Strategic Learning: Assessing Readiness and Results,” Center for Evaluation Innovation, 2014.
[38] Children’s Investment Fund Foundation (CIFF), “Every Child Deserves to Survive and Thrive,” [Online].
[39] J. Coffman and T. Beer, “Evaluation to Support Strategic Learning: Principles and Practices,” Center for Evaluation
Innovation, 2011.
[40] The William and Flora Hewlett Foundation, “What We’re Learning,” [Online]. Available: http://www.hewlett.org/what-
were-learning. [Accessed May 2016].
[41] The Walton Family Foundation, “Writing Performance Measures: A Guide for Grant Applicants”.
[42] The Rockefeller Foundation and The Goldman Sachs Foundation, “Social Impact Assessment: A Discussion Among
Grantmakers,” New York City, 2203.
[43] B. Sorenson, Interviewee, Executive Director, KR Foundation. [Interview]. 3 June 2016.
[44] C. Leisher, “Program Evaluation and Monitoring System (PEMS): An Overview for Project Managers - Africa Region (Fourth
Draft),” 2015.