This is a detailed paper on the use of evaluations to enhance organisational effectiveness, with a case study of Advance Afrika, a Uganda-based NGO working on the reintegration and economic empowerment of ex-convicts.
This was a paper presented to the 12th European Evaluation Society Biennial Conference in Maastricht, Netherlands. The paper looks at "Use of evaluation results to enhance organizational effectiveness: Do evaluation findings improve organisational effectiveness?"
This document discusses developing a research agenda for impact evaluation in development. It argues the agenda needs to address more than just causal inference challenges, and should cover all aspects of impact evaluation practice. This includes issues like values clarification, measurement, synthesis, and managing joint projects. The research agenda also needs to recognize development that goes beyond discrete projects to include partnerships and community involvement. Developing the agenda requires consultation, identifying gaps, and reviewing various types of research needed like documenting practice, positive deviance studies, and longitudinal studies. Some example research questions are provided.
This document provides an overview of monitoring and evaluation systems for health programs. It discusses the purpose and value of M&E, including providing evidence for decision making, organizational learning, and accountability. Key concepts around monitoring, evaluation, and operational research are defined. Principles of integrated design, unbiased measurement, and local capacity building for evaluation are covered. The document also presents examples of research questions and indicators for evaluating health programs from various perspectives.
This document outlines a model toolkit for conducting impact evaluations. It discusses key concepts in impact evaluation including definitions of impact, theories of change, causal attribution, and mixed methods approaches. The document proposes an ontological framework to guide impact assessment planning, covering aspects like subject area, target groups, research design, sampling, data collection and analysis methods. It describes experimental, quasi-experimental and non-experimental research designs for addressing causal attribution and achieving credible results. The goal is to integrate monitoring, evaluation and research from the beginning to generate a range of evidence and understand both outcomes and impacts of interventions over time.
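To make the causal-attribution idea concrete, here is a minimal sketch (my illustration, not part of the toolkit) of the simplest experimental design: comparing mean outcomes between a randomly assigned treatment group and a control group, with synthetic data standing in for real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic outcomes: the control group stands in for the counterfactual.
treated = rng.normal(loc=12.0, scale=3.0, size=200)  # outcome with the intervention
control = rng.normal(loc=10.0, scale=3.0, size=200)  # outcome without it

# With random assignment, the difference in means estimates the impact.
effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 200 + control.var(ddof=1) / 200)
print(f"estimated impact: {effect:.2f} (approx. 95% CI: +/- {1.96 * se:.2f})")
```

Quasi-experimental and non-experimental designs relax the random-assignment step and justify the comparison group in other ways, which is why the toolkit treats credibility of attribution as a design question.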
The document discusses a vision for using performance measurement to drive organizational change and improvement in healthcare. It summarizes several IOM reports calling for a national performance measurement system. The author argues that such a system needs to take an organizational perspective and account for contextual factors. An organizational model of performance is presented to illustrate how factors like strategy, structure, environment and resources interact. The author suggests organizational research can help by examining these contextual effects, providing implementation roadmaps and aiding assessment of different types of learning and change processes. Key research questions are proposed around how the local context influences implementation and effectiveness.
This document provides an introduction to a study on the impact of monitoring and evaluation (M&E) on employee performance. It discusses how M&E has evolved over time and its importance as a management tool. The study aims to establish how the key activities of M&E planning, training, baseline surveys and information systems influence employee performance. It will focus on M&E implementation at Rreda Estate Limited in Takoradi, Ghana. The objectives, research questions, significance and limitations of the study are also outlined.
This document provides a summary of metaevaluation, including its definition, history, types, models, purpose, process, standards, and checklists. Metaevaluation is defined as the evaluation of an evaluation to assess its quality and adherence to standards. It gained prominence in the late 20th century with the increased focus on educational program effectiveness. There are two types - proactive and retroactive. Key standards for metaevaluation include utility, feasibility, propriety, accuracy, and accountability. The metaevaluation process involves 10 steps including defining questions, collecting information, analyzing adherence to standards, and reporting findings. Checklists are provided to aid in conducting metaevaluations.
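As a toy illustration of how such a checklist can be operationalised (the 0-4 rating scale and the helper below are my assumptions, not part of any published checklist), an evaluation could be scored against the five standards named above:

```python
# Hypothetical ratings against the five metaevaluation standards.
STANDARDS = ["utility", "feasibility", "propriety", "accuracy", "accountability"]

def metaevaluation_summary(ratings: dict) -> str:
    """Summarise checklist ratings: mean score plus standards rated below 2."""
    unmet = [s for s in STANDARDS if ratings.get(s, 0) < 2]
    overall = sum(ratings.get(s, 0) for s in STANDARDS) / len(STANDARDS)
    return f"mean rating {overall:.1f}/4; standards needing attention: {unmet or 'none'}"

print(metaevaluation_summary(
    {"utility": 3, "feasibility": 2, "propriety": 4, "accuracy": 1, "accountability": 3}
))
```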
The document discusses evaluation of health programs. It defines evaluation as the systematic acquisition and assessment of information to provide useful feedback. The main goals of evaluation are to influence decision-making and policy formulation through empirically-driven feedback. Formative evaluation assesses needs and implementation, while summative evaluation determines outcomes, impacts, costs and benefits. Evaluation questions, methods, and frameworks are described to establish program merit, worth and significance based on credible evidence from stakeholders. Standards ensure evaluations are useful, feasible, proper and accurate.
Evaluation approaches, presented by Hari Bhusal
The document discusses evaluation approaches and methods. It defines evaluation as appraising the relevance, efficiency, effectiveness, impacts, and sustainability of plans, policies, programs and projects. Evaluations are used to draw lessons to improve future implementation and hold agencies accountable. The document then discusses different types of evaluations including formative, process, outcome and economic evaluations. It also outlines various evaluation approaches like appreciative inquiry, beneficiary assessment, case studies, contribution analysis, developmental evaluation, and participatory evaluation.
The document describes the Knowledge-To-Action Cycle, which consists of an Action Cycle and Knowledge Funnel. The Action Cycle is a 7-phase process for implementing knowledge to create planned changes. It involves identifying knowledge gaps, adapting knowledge to context, assessing barriers, selecting interventions, monitoring use, evaluating outcomes, and sustaining use. The Knowledge Funnel distills knowledge through inquiry, synthesis, and creating tools/products for end-users.
Untangling some challenges and opportunities in water research on the African continent today – with focus on domestic and agricultural use
Presentation: Stella Williams, Agricultural Economist and Professor, Obafemi Awolowo University, Ile Ife, Osun State, Nigeria
The International Forum on Water and Food (IFWF) is the premier gathering of water and food scientists working on improving water management for agricultural production in developing countries.
The CGIAR Challenge Program for Water and Food (CPWF) represents one of the most comprehensive investments in the world on water, food and environment research. The Forum explores how the CPWF research-for-development (R4D) approach can address water and food challenges through a combination of process, institutional and technical innovations.
Supporting paper for NPT Master Class 'Getting ideas into Practice: normalising implementation of complex interventions across the healthcare system' - Collaborating for Better Care Partnership Master Class 23rd October 2014
A toolkit for complex interventions and health technologies using normalizati... (Normalizationprocess)
The document introduces Normalization Process Theory (NPT), a conceptual model for evaluating the implementation and integration of new health technologies and complex interventions. NPT focuses on the work done by individuals and groups to embed interventions in practice. The NPT Toolkit provides managers, clinicians and researchers with a simplified framework based on NPT to assess implementation processes. It includes questions related to coherence, participation, action and appraisal, and allows users to gauge these implementation factors using a visual interface. The toolkit is meant as an aid for critical thinking, not a validated measurement instrument.
This document discusses evaluating complex interventions. It begins by defining evaluation and complex interventions, noting their uncertainty and non-linear relationships. It then outlines several approaches to evaluating complex interventions, including logic analysis, realistic evaluation, and contribution analysis. These approaches emphasize understanding the intervention's theory of change, context, and embracing uncertainty through multiple perspectives. The document concludes by noting these approaches all focus on a program's theory, context, and ensuring a valid and rigorous research process.
This document discusses evaluation methodology for practices in science communication. It begins by noting the lack of systematic evaluation has made it difficult to compare practices, develop theories, and ensure accountability. The author argues for developing a common evaluation language while acknowledging the diversity of science communication. A key challenge is that practices have diverse purposes and actors. The author proposes using program theory and logic models to systematically evaluate practices in an ex post facto manner. This involves practitioners describing the purposes and means of a practice after completion to facilitate evaluation. The discussion considers how to account for change and complexity in program theories. The goal of developing evaluation is to improve practices for public benefit rather than administrative control.
This document outlines the presentation on evaluating a national health programme. It discusses key topics like monitoring versus evaluation, the history and purpose of evaluation, different types of evaluation including formative, summative and participatory evaluation. The document details the evaluation process including planning evaluations, gathering baseline data, implementing evaluations and using evaluation results. It also covers standards for effective evaluation including ensuring the utility, feasibility, propriety and accuracy of evaluations. The overall summary is that the document provides an overview of best practices for conducting program evaluations of national health initiatives.
This document discusses various usability evaluation methods for assessing the effectiveness, efficiency and satisfaction of users interacting with a system. It covers metrics like completion rates, errors and satisfaction questionnaires. Inspection methods like heuristic evaluation and cognitive walkthroughs are outlined. User-based evaluations involve usability testing with tasks and measures of success/failure rates, time on task and errors. Remote and lab studies, eyetracking and card sorting are also summarized. The document provides guidance on planning evaluations through defining goals, users, tasks and data collection.
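A minimal sketch of how the quantitative metrics mentioned here (completion rate, time on task, error counts) can be computed from per-user task results; the data and field names are invented for illustration:

```python
from statistics import mean, median

# Hypothetical results for one task across four test participants.
results = [
    {"user": 1, "completed": True,  "seconds": 42.0, "errors": 0},
    {"user": 2, "completed": True,  "seconds": 61.5, "errors": 2},
    {"user": 3, "completed": False, "seconds": 90.0, "errors": 4},
    {"user": 4, "completed": True,  "seconds": 38.2, "errors": 1},
]

completion_rate = mean(r["completed"] for r in results)  # booleans average to a rate
time_on_task = median(r["seconds"] for r in results)     # median resists outliers
errors_per_user = mean(r["errors"] for r in results)

print(f"completion {completion_rate:.0%}, median time {time_on_task:.1f}s, "
      f"errors/user {errors_per_user:.1f}")
```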
1. The document presents a novel monitoring system to track the activities, partnerships, and resource allocations of large research organizations over time.
2. The system collects data from individual researchers about their collaborations over the past 12 months to aggregate into a description of the organization's portfolio and engagement with other actors.
3. This information is important for research planning and management, as it can show how efforts are actually allocated compared to budgets, and identify immediate effects of incentives or structural changes on researcher activities (a minimal aggregation sketch follows).
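The aggregation step in point 2 amounts to a group-by over individual reports. Here is a hedged sketch of that step; the column names and figures are invented, since the actual system's data model is not described in the summary:

```python
import pandas as pd

# Hypothetical 12-month activity reports, one row per researcher-collaboration.
reports = pd.DataFrame([
    {"researcher": "r1", "partner": "NGO-A",  "theme": "water",  "fte_share": 0.20},
    {"researcher": "r1", "partner": "Univ-B", "theme": "health", "fte_share": 0.10},
    {"researcher": "r2", "partner": "NGO-A",  "theme": "water",  "fte_share": 0.35},
])

# Roll individual reports up into an organisational portfolio view.
portfolio = reports.groupby("theme")["fte_share"].sum()          # effort per theme
engagement = reports.groupby("partner")["researcher"].nunique()  # breadth of partnerships

print(portfolio)    # compare actual effort allocation against budgets
print(engagement)   # engagement with each external actor
```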
NPT is a framework for thinking about implementing interventions by focusing on how interventions can become part of everyday practice through different groups working together. It involves using four sets of questions to identify potential barriers to successfully implementing an intervention and proposing solutions to improve the implementation process.
Van der vleuten_-_twelve_tips_for_programmatic_assessment (cnmcmeu)
This document provides 12 tips for implementing programmatic assessment. Programmatic assessment aims to optimize assessment's learning, decision-making, and quality assurance functions by purposefully choosing individual assessments and aggregating information across assessments. The tips include: developing a master assessment plan aligned with the curriculum; promoting feedback over pass/fail decisions for individual assessments; and adopting a robust electronic portfolio system to collect and aggregate assessment information.
This document provides a strategic advocacy framework to help organizations like Chintan monitor and evaluate their advocacy efforts. It recommends that Chintan develop a theory of change to integrate its programs, goals, and mission. The framework includes defining goals and interim outcomes and tracking activities. Monitoring and evaluation can help Chintan understand what is effective, adapt strategies, and demonstrate progress. However, advocacy can be difficult to evaluate due to shifting timelines and strategies. The document provides recommendations for Chintan to plan advocacy in its organizational context and become a learning organization that regularly reviews lessons from its work.
The document provides an overview of Santiago Arzubi's architectural portfolio from 2012-2016. It includes 18 residential and commercial projects designed through his firm AR Architects. The portfolio shows floor plans, renderings, and pictures from projects like the Las Tecas building, SG house, DC house, and S.O.S office that demonstrate Arzubi's focus on designing for his clients' needs and forming relationships that go beyond the architectural plans.
The document summarizes the history of computers and computing from their origins to the present day. It began with the first mechanical calculating machines in the 17th century and advanced through five generations of computers, from the first electronic ones based on vacuum tubes to today's personal computers and laptops.
This document provides instructions for editing images online with the Pixlr editor, including how to load an image, apply colour and format filters, crop, move text over the image, align using a grid, and save the changes made. The Pixlr editor is similar to Photoshop but simpler to use for non-expert users, and is recommended for changing tones and image sizes and for working with images and text.
This document describes the difference between the surface web and the deep web. The surface web includes sites whose contents can be indexed by search engines and retrieved through searches. The deep web includes information stored in databases that is not accessible through search engines, such as opaque, private, proprietary and invisible content.
Elizabeth Joachín Navarrete, a third-semester Accounting student, presents a report with a table of contents, graphs from a survey, and her personal opinion of the Basic Computing course, concluding with her perspective on the subject.
This document presents an introduction to the concept of software quality. It defines software quality as an effective process that creates a useful product which adds value for both the producer and the end user. It describes various quality dimensions and indices, such as functionality, reliability, usability and efficiency, according to the ISO 9126 standard. It explains that achieving software quality requires software engineering methods, project management, quality control and quality assurance. It also covers the ...
The document discusses why a mobile presence is important for businesses, introduces Hooduku's cross-platform mobile development framework, which allows creating native apps using web technologies, and notes that some apps developed with this framework are already live, inviting the reader to contact Hooduku for more information.
The technological variable as a competitive advantage for the company (pantonyerivera)
This document deals with technology management and negotiation. It explains that technology transfer in Colombia can be achieved through direct mechanisms such as licensing and the sale of technology, and indirect ones such as foreign investment. It also discusses the main aspects that must be covered in a technology negotiation, such as royalties, contract duration, ownership of improvements, and dispute resolution. The final objective is to reach a mutually beneficial agreement between the ...
This document provides project profiles for several construction projects completed by the company. It includes details on:
- The refurbishment of the Gouritz Rail Bridge in South Africa from 2010-2011.
- The Ambatovy nickel refinery project in Madagascar from 2006-2012, which was successfully completed.
- The expansion of the Tenke Fungurume copper mine in the Democratic Republic of Congo from 2010-2012, also completed successfully.
- The Nacala corridor railway project in Malawi from 2013-2014 to construct a railway, also completed successfully.
- An ongoing project in Durban to reconstruct berths at the Maydon Wharf from 2014-2016,
Report by I.A. Makurova, deputy head of the directorate, at the conference of 02.03.... (Yuliya Zolotukhina)
Results of the work of the Territorial Directorate of the Ministry of Social Development of Perm Krai for the Chaykovsky municipal district in 2015.
Report by I.A. Makurova, deputy head of the directorate
The field of program evaluation presents a diversity of images a....docx (cherry686017)
The field of program evaluation presents a diversity of images and claims about the nature and role of evaluation that confounds any attempt to construct a coherent account of its methods or confidently identify important new developments. We take the view that the overarching goal of the program evaluation enterprise is to contribute to the improvement of social conditions by providing scientifically credible information and balanced judgment to legitimate social agents about the effectiveness of interventions intended to produce social benefits. Because of its centrality in this perspective, this review focuses on outcome evaluation, that is, the assessment of the effects of interventions upon the populations they are intended to benefit. The coverage of this topic is concentrated on literature published within the last decade with particular attention to the period subsequent to the related reviews by Cook and Shadish (1994) on social experiments and Sechrest & Figueredo (1993) on program evaluation.
The word ‘evaluation’ has become increasingly used in the language of community, health and social services and programs. The growth of talk and practice of evaluation in these fields has often been promoted and encouraged by funders and commissioners of services and programs. Following the interest of funders has come a growth in the study and practice of evaluation by community, health and social service practitioners and academics. When we consider why this move in evaluative thinking and practice has occurred, we can assume the position of the funder and simply answer, ‘...because we want to know if this program or service works’. Practitioners, specialists and academics in these fields have been called upon by governments and philanthropists to aid the development of effective evaluation. Over time, they have also led their own thinking and practice independently. Evaluation in its simplest form is about understanding the effect and impact of a program, service, or indeed a whole organization. Evaluation as a practice is not so simple, however, largely because in order to assess impact, we need to be very clear at the beginning what effect or difference we are trying to achieve.
The literature review begins with an overview of qualitative and quantitative research methods, followed by a description of key forms of evaluation. Health promotion evaluation and advocacy and policy evaluation will then be explored as two specific domains. These domains are not evaluation methodologies, but forms of evaluation that present unique requirements for effective community development evaluation. Following this discussion, the review will explore eight key evaluation methodologies: appreciative enquiry, empowerment evaluation, social capital,
social return on investment, outcomes based evaluation, performance dashboards and scorecards and developmental evaluation. Each of these sections will include specific methods, the values base of each methodo ...
This document discusses interactive evaluation, which involves participants playing a major role in setting goals, delivery, and evaluation. It aims to provide systematic evaluation findings to help organizations continuously improve their programs. Key approaches to interactive evaluation include responsive evaluation, action research, quality review, developmental evaluation, and empowerment evaluation. The overall goal is to integrate evaluation into the daily processes of organizations to help them become more effective and efficient.
This document discusses evaluation principles, processes, components, and strategies for evaluating community health programs. It begins by defining evaluation and explaining that the community nurse evaluates community responses to health programs to measure progress towards goals and objectives. The evaluation process involves assessing implementation, short-term impacts, and long-term outcomes. Key components of evaluation include relevance, progress, cost-efficiency, effectiveness, and outcomes. The document then describes various evaluation strategies like case studies, surveys, experimental design, monitoring, and cost-benefit/cost-effectiveness analyses and how they can be useful for evaluation.
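As a toy example of the cost-effectiveness strategy mentioned above, the sketch below computes a cost per unit of outcome for two hypothetical programme options; all names and figures are invented:

```python
# Illustrative cost-effectiveness comparison of two programme options.
programmes = {
    "clinic outreach": {"cost": 120_000.0, "cases_averted": 800},
    "school sessions": {"cost":  90_000.0, "cases_averted": 500},
}

for name, p in programmes.items():
    cer = p["cost"] / p["cases_averted"]  # cost-effectiveness ratio
    print(f"{name}: ${cer:,.0f} per case averted")
```

The option with the lower ratio delivers the same unit of outcome more cheaply, which is the comparison such analyses are designed to support.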
Program Evaluation: Forms and Approaches, by Helen A. Casimiro
This document discusses different forms and approaches to program evaluation. It describes five forms of evaluation: 1) Proactive Evaluation which occurs before program design to synthesize knowledge for decisions, 2) Clarificative Evaluation which occurs early in a program to document essential dimensions, 3) Participatory/Interactive Evaluation which occurs during delivery to involve stakeholders, 4) Monitoring Evaluation which occurs over the life of an established program to check progress, and 5) Impact Evaluation which assesses the effects of a settled program. It also outlines several evaluation approaches including behavioral objectives, four-level training outcomes, responsive, goal-free, and utilization-focused evaluations.
CHAPTER SIXTEEN. Understanding Context: Evaluation and Measurement in Not-for-Profit Sectors (JinElias52)
CHAPTER SIXTEEN
Understanding Context: Evaluation and Measurement in Not-for-Profit Sectors
Dale C. Brandenburg
Many individuals associated with community agencies, health care, public workforce development, and similar not-for-profit organizations view program evaluation as akin to a visit to the dentist's office. It's painful, but at some point it cannot be avoided. A major reason for this perspective is that evaluation is seen as taking money away from program activities that do good for others, that is, intruding on valuable resources that are intended for delivering the "real" services of the organization (Kopczynski & Pritchard, 2004). The underlying logic is that, since there are limited funds available to serve the public good, why should a portion of program delivery be allocated to something other than serving people in need? This is not an unreasonable point, and one that program managers in not-for-profits face on a continuing basis.
The focus of evaluation in not-for-profit organizations has shifted in recent years from administrative data to outcome measurement, impact evaluation, and sustainability (Aspen Institute, 2000), thus a shift from short-term to long-term effects of interventions. Evaluators in the not-for-profit sector view their world as the combination of technical knowledge, communication skills, and political savvy that can make or break the utility and value of the program under consideration. Evaluation in not-for-profit settings tends to value the importance of teamwork, collaboration, and generally working together. This chapter is meant to provide a glimpse at a minor portion of the evaluation efforts that take place in the not-for-profit sector. It excludes, for example, the efforts in public education, but does provide some context for workforce development efforts.
CONTRAST OF CONTEXTS
Evaluation in not-for-profit settings tends to have different criteria for the judgment of its worth than is typically found in corporate and similar settings. Such criteria are likely to include the following:
How useful is the evaluation?
Is the evaluation feasible and practical?
Does the evaluation hold high ethical principles?
Does the evaluation measure the right things, and is it accurate?
Using criteria such as the above seems a far cry from concepts of return on investment that are of vital importance in the profit sector. Even the cause of transfer of training can sometimes be of secondary importance to assuring that the program is described accurately. Another difference is the pressure of time. Programs offered by not-for-profit organizations, such as an alcohol recovery program, take a long time to see the effects and, by the time results are viewable, the organization has moved on to the next program. Instead we often see that evaluation is relegated to measuring the countable, the numbers of people who have completed the program, rather than the life-changing impact that decreased alcohol abuse has on ...
The Utilization of DHHS Program Evaluations: A Preliminary Examination (Washington Evaluators)
Washington Evaluators Brown Bag
by Andrew Rock and Lucie Vogel
October 5, 2010
The presentation will describe a study conducted by the Lewin Group, for the Assistant Secretary for Planning and Evaluation, on the utilization of program evaluations in the Department of Health and Human Services. The study used an online survey of project officers and managers from a sample of program evaluations selected from the Policy Information Center database. To supplement the survey data, Lewin conducted focus groups with senior staff in six agencies. Key findings of the study focused on direct, conceptual and indirect use, and on the importance of high-quality methods, stakeholder involvement in evaluation design, the presence of a champion, and study findings that were perceived to be important. The study concluded with recommendations for a strengthened internal evaluation group within HHS and future research using a case study approach for greater in-depth examination.
Mr. Andrew Rock initiated/conceived and was the Project Officer (COTR) for the study. He works for the Office of Planning and Policy Support in the Office of the Assistant Secretary for Planning and Evaluation (ASPE), HHS. He is responsible for the Department's annual comprehensive report to Congress on HHS evaluations, coordinates the HHS legislative development process, represents his office on the Continuity of Operations Workgroup, and has worked on various cross-cutting issues including homelessness, tribal self-governance, and health reform. In addition to his work in ASPE, he has worked at the Centers for Medicare and Medicaid Services, the Public Health Service, and the Office of the National Coordinator for Health Information Technology.
Ms Lucie Vogel served as a Stakeholder Committee Member for the study. She works in the Division of Planning, Evaluation and Research in the Indian Health Service, developing Strategic and Health Service Master Plans, conducting evaluation studies, and reporting on agency performance. She previously served in evaluation and planning positions in the Food Safety and Inspection Service, the Virginia Department of Rehabilitative Services, the University of Virginia, and the Wisconsin Department of Health and Social Services.
This presentation gives a vivid description of the basics of doing a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA) and a practical example from the CLICS project. The presentation also includes the CDC framework for program evaluation.
N.B.: Kindly open the PPT in SlideShare mode to make full use of the animations.
Stakeholder Involvement In Evaluation Planning
Student Name
Institution Name
Course Number
Due Date
Faculty Name
Topic: Stakeholder Involvement in Evaluation Planning
Stakeholders are the people who have a stake in the evaluation: individuals who have an interest in, or are impacted by, the evaluation and its results. I would consider involving stakeholders in health program planning. Stakeholders have the ability to provide ideas and aid in the creation of potential solutions (Ferreira et al., 2020). In most cases stakeholders come from various backgrounds; they therefore look at issues from various perspectives. This allows opposing viewpoints to be expressed and discussed. Engaging stakeholders from the planning stage maximizes the chance of project success through the final execution. They may also help prevent unforeseen problems (Michnej & Zwoliński, 2018). They have a great influence on the community of animal lovers, thus it is imperative to have an advocate instead of an adversary.
I would consider facilitating stakeholders' involvement by maintaining open communication. Stakeholders need to be kept updated on the organization's core purpose. It is essential to be consistent in the messages, and to use them to show employees how they fit in the plan as well as how their contributions have helped shape the decisions made (Smith, 2017). Individuals who know what is expected, and how they contribute, tend to be more engaged and committed than those who do not. It is essential to ensure that stakeholders know where they fit in. Engaging employees in the planning process helps build ownership in the firm.
References
Ferreira, V., Barreira, A. P., Loures, L., Antunes, D., & Panagopoulos, T. (2020). Stakeholders’ engagement on nature-based solutions: A systematic literature review. Sustainability, 12(2), 640.
Michnej, M., & Zwoliński, T. (2018). The role and responsibility of stakeholders in the planning process of the sustainable urban mobility in the city Krakow. Transport Economics and Logistics, 80, 159-167.
Smith, P. A. (2017). Stakeholder engagement framework. Information & Security, 38, 35-45.
TOPIC: Strategies and Ethics
As the director of the local public health department, you are preparing to conduct a town hall presentation. In it you will communicate the direction of the strategic plan. Your audience will include collaborative partners (invested stakeholders) such as academicians, health professionals, state health department staff, representatives from affected communities, and representatives from nongovernmental organizations.
Recall that your Stakeholder Involvement in Evaluation Planning discussion in Unit 5 reviewed the planning and evaluation cycle (Figure 11-1 in your textbook). In addition, in that discussion you explained where in the cycle and how you would seek stakeholder involvement in evaluation planning. The town hall presentation is on ...
This document discusses developing logic models to focus program evaluations. It defines logic models and their components, and provides an example logic model for an education program to prevent HIV infection. Logic models describe the resources, activities, outputs, and short- and long-term outcomes of a program, helping evaluators design focused evaluation questions. The document emphasizes engaging stakeholders in developing the logic model and determining the evaluation's purpose and questions.
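A logic model can also be written down as a simple data structure. The sketch below mirrors the components listed above, populated with illustrative entries for an HIV-prevention education program of the kind the example describes; the field names and entries are my own, not the document's:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic-model skeleton: each column feeds the next."""
    inputs: list = field(default_factory=list)       # resources
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    short_term_outcomes: list = field(default_factory=list)
    long_term_outcomes: list = field(default_factory=list)

hiv_education = LogicModel(
    inputs=["trained educators", "curriculum", "funding"],
    activities=["classroom sessions", "peer outreach"],
    outputs=["sessions delivered", "students reached"],
    short_term_outcomes=["improved knowledge of HIV prevention"],
    long_term_outcomes=["reduced rate of new HIV infections"],
)
```

Laying a program out this way makes it easy to attach an evaluation question to each column, which is the focusing role the document assigns to logic models.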
Introduction to Development Evaluation 发展评价导言 (Dadang Solihin)
Shanghai International Program for Development Evaluation Training, Asia-Pacific Finance and Development Center, 200 Panlong Road, Shanghai, October 9, 2008
This document provides guidance on evaluating nutrition initiatives. It outlines key steps to developing an evaluation framework, including: defining objectives; selecting process, outcome and impact indicators; and choosing appropriate data collection methods. The summary should evaluate the intervention, not just describe it. An effective evaluation demonstrates the value of the initiative and whether objectives were achieved.
Analysis of Performance Appraisal Systems on Employee Job Productivity in Pub... (inventionjournals)
A university appraisal system is meant to enhance the performance of employees by integrating an individual's goals with those of the organization. Despite university management having appraisal systems, performance in public universities in the country remains relatively poor. The purpose of the study was to analyze the effect of performance appraisal systems on employee job productivity in public universities. The main objective of the study was to determine the effect of self-assessment on the performance of employees in public universities. The research study was carried out in four universities, namely Masinde Muliro University of Science and Technology, Maseno, Moi and Jaramogi Oginga Odinga University of Science and Technology. The main data collection instrument was a questionnaire. Both content and construct checks were carried out through the engagement of experts in preparing the questionnaire. Piloting was done at Laikipia University College, though the results were not used in the study. To ensure that the instrument was reliable, a Cronbach's Alpha coefficient of 0.876 was attained, well above the 0.7 recommended in the social sciences. The study employed a descriptive survey research design. The target population consisted of 11,296 employees and 4 Registrars in charge of Administration. Purposive sampling was used to select the four universities and four registrars. Data analysis was done using the Statistical Package for the Social Sciences (Version 20). Both descriptive and inferential statistics were used in data analysis. The results were presented in the form of tables, charts and cross-tabulations. From the findings, self-assessment was an important part of performance appraisal, as it contributed to improvement in employee job productivity. The findings will contribute to the pool of knowledge in the field of Human Resource Management and will form a basis of reference for interested parties in future. The management of public universities can use the findings of this study to guide them in performance management. Furthermore, the findings will be a source of reference for academicians who intend to carry out studies on the subject of performance appraisal systems.
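For readers unfamiliar with the reliability statistic cited above, here is a small sketch of how a Cronbach's Alpha coefficient is computed from a respondents-by-items score matrix; the response data are invented, not drawn from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}  (0.7 is the usual threshold)")
```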
Programme evaluation, by Priyadarshinee Pradhan (Priya Das)
This document discusses concepts, needs, goals and tools related to program evaluation. It defines evaluation as a systematic process to determine the merit, worth and significance of a program or intervention using set standards and criteria. The primary purposes of evaluation are to gain insight and enable reflection to identify future changes. Some key goals of program evaluation include improving program design, assessing progress towards goals, and determining effectiveness and efficiency. Common tools for program evaluation discussed include interviews, observations, questionnaires, and case studies.
Methods of Program Evaluation. Evaluation Research Is Offered (Jennifer Wood)
This document discusses different approaches to evaluation research and program evaluation. It provides examples of different types of evaluation research, such as problem analysis, evidence-based policy, and evidence generation. It also discusses publication bias in medical informatics evaluation research and evaluates the training evaluation process for a dinner event. Key aspects of performance evaluations and the challenges associated with the performance evaluation process are outlined as well. Different participant-oriented approaches to evaluation like participatory evaluation, developmental evaluation, and empowerment evaluation are also presented.
This document discusses organizational diagnosis and organizational development. It defines organizational diagnosis as examining an organization to determine gaps between current and desired performance. Organizational development aims to improve organizational effectiveness through planned interventions. The document outlines facets of diagnosis including processes, models, and methods. It also discusses OD practitioner competencies and styles, as well as common intervention techniques like team building, surveys, and structural changes. Prerequisites for effective OD include management commitment, communication, resources and using a systematic diagnostic process.
Definations for Learning 24 July 2022 [Autosaved].pptx (InayatUllah780749)
1. M&E definitions provide explanations of key terms like monitoring, evaluation, and different types of evaluations such as formative, process, outcome, impact, and summative evaluations.
2. Different types of evaluations occur at various stages of a project and serve different purposes, such as improving project implementation, assessing progress, or evaluating overall impact.
3. Evaluating coherence considers how well a project's internal components and external partnerships support its goals, highlighting the importance of synergies within and beyond the project.
Assessing and improving partnership relationships and outcomes: a proposed fr... (Emily Smith)
This document proposes a framework for assessing partnership relationships and outcomes. The framework aims to: 1) improve partnership practice as programs are implemented, 2) refine and test hypotheses about how partnerships contribute to performance, and 3) provide lessons for future partnerships. The proposed assessment approach is continuous, participatory, and developmental. It measures compliance with partnership success factors, the degree of partnership practices, partnership outcomes, partner performance, and efficiency. The framework integrates process and institutional factors into performance measurement to provide a more holistic view of how partnerships function and contribute to outcomes.
This annotated compendium of evaluation planning guides can help you understand the basics of conducting an evaluation; learn how to create a logic model and indicators; understand evaluation terminology; develop performance management metrics; and evaluate your research, knowledge translation and commercialization activities, outputs and outcomes.
The document discusses project evaluation and recycling. It provides information on key concepts related to monitoring, evaluation, and the project life cycle. Some main points:
- Monitoring is the routine collection and use of data to assess progress towards objectives. Evaluation assesses activities designed to achieve tasks in a specified period of time.
- There are different types of evaluation that can be done at various stages, including formative, summative, and impact evaluations. Internal evaluations are done by project staff while external evaluations involve outside parties.
- Effective evaluation assesses outcomes, impacts, efficiency, effectiveness, and relevance. It utilizes tools like reports, surveys, and reviews. The results are then used to update project plans and determine
Milestone 4
Student’s Name
University Affiliation
Southern New Hampshire University
Milestone 4
Description of the Initiative Evaluation Plan
Initiative evaluation involves systematic mechanisms for gathering, reviewing, and utilizing information to answer questions concerning the initiative, policies, and programs, specifically about their effectiveness and efficiency. Initiative evaluation can entail both qualitative and quantitative techniques of social research. The initiative evaluation plan also states the intended use of the evaluation outcomes for program enhancement and decision making. The evaluation plan serves to clarify the initiative's purpose and expected results (Dudley, 2020). The evaluation plan provides the direction that the monitoring should take based on the initiative priorities, the available resources, time, and the skills required to complete the evaluation.
The initiative will have a well-documented plan to foster transparency and to ensure that stakeholders are on the same page about the purpose, use, and beneficiaries of the evaluation outcomes. Utilization of the evaluation outcomes is not something that can simply be wished for when implementing an initiative; it must be planned, directed, and intentional (Dudley, 2020). The evaluation plan for this initiative will have many benefits, including facilitating the capacity to establish strong connections with partners and stakeholders. The plan is also essential for making the initiative transparent to stakeholders and decision-makers, and it serves as a means of advocating for evaluation resources based on negotiated priorities. The evaluation plan is also critical for identifying whether there are enough intervention resources and time to carry out the desired evaluation exercises and answer the prioritized evaluation questions.
When developing the plan for evaluating the initiative targeting to promote health and wellbeing in the community, the key steps must be to develop an effective strategy. The key steps to be followed when creating the evaluation plan differ depending on the project type to be evaluated. The first step entails engaging the stakeholders. When finding the purpose of the evaluation procedures, it is crucial to determine its purpose and the stakeholders involved in the implementation process of the intervention. Identifying the purpose of the evaluation process and stakeholders involved is critical because the two components serve as the basis for evaluation planning, target, design, and comprehension of the outcomes. Stakeholders' engagement is necessary to enable the support of the evaluation process. Involving stakeholders in the evaluation process can have many advantages. Stakeholders comprise the people who use the evaluation outcomes, support and keep the initiative or those impacted by the intervention activities or evalu ...
Similar to Do evaluations improve organisational effectiveness (20)
Often times we are confronted with the situation that we must select program/project beneficiaries. and it is very possible that you can have a wrong beneficiary for a right intervention or a right beneficiary for the wrong intervention. But, the intention usually is to have a good match. I undertook a process that seemed simple but gave us amazing results. We were able to have a list of program beneficiaries that was not disputed. We developed a questionnaire which was administered to 505 potential beneficiaries, from who we selected the targeted 440. A weighted score was the basis to identify who should benefit
This document explains the weights and the weighted scores
The document discusses the Monitoring, Evaluation and Learning (MEL) wheel, which consists of 6 elements: 1) Design, 2) Collect, 3) Store, 4) Analyze, 5) Apply, and 6) Share. It provides details on each element, such as what data to collect, how to store and analyze data, how to apply findings, and how to disseminate lessons learned. The purpose of the MEL wheel is to help organizations strengthen their MEL systems in order to win donor confidence through effective monitoring, evaluation and learning.
The document discusses the Monitoring, Evaluation and Learning (MEL) wheel, which consists of 6 elements: 1) Design, 2) Collect, 3) Store, 4) Analyze, 5) Apply, and 6) Share. It provides definitions for monitoring, evaluation and learning. For each element of the MEL wheel, it describes the key questions that need to be addressed such as what data to collect, how to store and analyze the data, how to apply the findings, and how to disseminate lessons learned. The overall goal of the MEL system is to help organizations improve their programs and win donor confidence through effective monitoring, evaluation and learning.
This document discusses evaluation use and its impact on organizational effectiveness. It defines monitoring and evaluation, explaining why evaluations are conducted and what constitutes evaluation use. The document then presents a case study of an evaluation of a youth entrepreneurship project in Uganda. Key recommendations from the evaluation were implemented, leading to the development of two new projects. Finally, the document discusses factors that influence evaluation use, including organizational culture and structure, evaluation quality, and external pressures. It concludes that involving stakeholders and allowing organizations to learn from failures can maximize evaluation use and its benefits for organizational learning and improvement.
This document discusses factors that influence whether evaluations improve organizational effectiveness. It defines evaluation and discusses different types of evaluation use, including instrumental, conceptual, and process use. Process use is seen as most likely to enhance effectiveness by facilitating learning and changes in behavior. The document presents a case study of an evaluation of an ex-inmates reintegration project that was subsequently utilized to improve the project design and inform two new projects. Key factors influencing evaluation utilization include quality of the evaluation, organizational support, and external environment. Quality entails stakeholder participation, timely evaluation, and credible evidence.
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
Indira awas yojana housing scheme renamed as PMAYnarinav14
Indira Awas Yojana (IAY) played a significant role in addressing rural housing needs in India. It emerged as a comprehensive program for affordable housing solutions in rural areas, predating the government’s broader focus on mass housing initiatives.
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
UN WOD 2024 will take us on a journey of discovery through the ocean's vastness, tapping into the wisdom and expertise of global policy-makers, scientists, managers, thought leaders, and artists to awaken new depths of understanding, compassion, collaboration and commitment for the ocean and all it sustains. The program will expand our perspectives and appreciation for our blue planet, build new foundations for our relationship to the ocean, and ignite a wave of action toward necessary change.
United Nations World Oceans Day 2024; June 8th " Awaken new dephts".Christina Parmionova
The program will expand our perspectives and appreciation for our blue planet, build new foundations for our relationship to the ocean, and ignite a wave of action toward necessary change.
AHMR is an interdisciplinary peer-reviewed online journal created to encourage and facilitate the study of all aspects (socio-economic, political, legislative and developmental) of Human Mobility in Africa. Through the publication of original research, policy discussions and evidence research papers AHMR provides a comprehensive forum devoted exclusively to the analysis of contemporaneous trends, migration patterns and some of the most important migration-related issues.
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
Use of Evaluation results to enhance organizational effectiveness: Do evaluation findings improve organisational effectiveness?
(EES16-0070)
Innocent K. Muhumuza¹
¹ Planning, Monitoring and Evaluation, Caritas Switzerland, Kampala, Uganda
Abstract
The purpose of this paper is to highlight the important factors to consider in designing and implementing evaluations that improve program effectiveness (the extent to which a project or programme is successful in achieving its objectives). Specifically, the paper defines the terms evaluation and utilisation, describes the types of use and the factors influencing utilisation, and presents a case study of utilisation.
Keywords
Designing, participation, ownership, utilization, improved organizational effectiveness
Defining evaluation and why evaluations are conducted in organisations.
In contemporary project and programme management, the terms monitoring and evaluation (M&E) have tended to be treated as synonymous. However, there is a clear distinction between the two. Even though both contribute to enhancing organisational effectiveness, they answer distinct project management questions, and different institutions and scholars define evaluation differently. This paper focuses on whether evaluations improve organisational effectiveness.
In the Organisation for Economic Co-operation and Development (OECD) glossary of key terms (OECD, 2002), evaluation is defined as “The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability”.
The UNDP defines evaluation (UNDP, 2002) as “A time-bound exercise that attempts to assess systematically and objectively the relevance, performance and success of ongoing and completed programmes and projects”.
Whereas monitoring is the systematic collection and analysis of information as a project
progresses, evaluation is the comparison of actual project impacts against the agreed
strategic plans (Shapiro, 2010).
In light of the above, one would then question the rationale for conducting evaluations.
It is worth noting that interest in and demand for evaluation have grown progressively over the decades among both public and non-governmental organisations, partly due to donors' increased demand for accountability and partly due to the quest to learn from experience.
The United Nations Office on Drugs and Crime (UNODC) advances learning and accountability as the two reasons for conducting evaluation. Evaluation is presumed to improve the planning and delivery of interventions and decision making based on findings, recommendations and lessons learned, and to provide objective and up-to-date evidence of what UNODC has achieved and what impact has been produced with the resources provided. Evaluation also aims at accounting for the use of resources and for the results produced.
The UNDP in its glossary of terms (UNDP, 2002), also points out that the aim of evaluation is
to determine the relevance and fulfillment of objectives, development efficiency,
effectiveness, impact and sustainability. An evaluation should provide information that is
credible and useful, enabling the incorporation of lessons learned into the decision-making
process of both recipients and donors. This is also a position that is held by the OECD.
The broader purpose of evaluation is to construct and provide judgements about facts and values to guide choice and action (Dasgupta, 2001). In a similar vein, Jackson and Kassam (1998) argue that monitoring and evaluation is a process of knowledge generation, self-assessment, and joint action in which stakeholders in a program collaboratively define the evaluation issues, collect and analyse data, and take action as a result of what they learn through this process.
The Centers for Disease Control and Prevention (CDC) recognizes that program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. This implies that evaluations may not necessarily be conducted as a matter of need by the primary users, in this case the managers and implementers, but because they are demanded by funders and authorizers.
The Development Assistance Committee (DAC) stipulates that an evaluation should answer questions on the relevance, efficiency, effectiveness, impact and sustainability of interventions, projects, programmes, and policies.
The varying definitions of evaluation and reasons for conducting evaluations point to the fact that evaluations, when utilised, can actually improve the effectiveness of project/programme implementation, which would ultimately result in organisational effectiveness. However, this is only possible if there is willingness to use the evaluation as an improvement tool.
Understanding evaluation use in development projects:
From the different definitions of evaluation and the varying reasons why evaluations are conducted, it is implied that evaluations should stimulate action. It is also an expectation among evaluators that their work will be useful to policy makers, program managers and other stakeholders in solving social problems. It is further argued that society justifies spending large amounts of money on evaluations with the expectation that there will be immediate payoffs, and so if evaluations are not useful then the funds should be expended on alternative uses (Shadish, Cook and Leviton, 1991). The argument by Shadish et al. is in line with the focus of this paper: the utilisation of evaluations to improve effectiveness. The key question here is: do evaluations actually stimulate action? The answer(s) to this question lead to analysing whether evaluations are utilised [used] or not. The term “evaluation utilisation” is used interchangeably with “evaluation use” in this paper.
Different scholars have had different perspectives on the utilisation [use] of evaluations. One way to look at evaluation utilisation is as the application of evaluation processes, products or findings to produce an effect (Johnson, Greenseid et al., 2009).
Evaluation use also concerns how real people in the real world apply evaluation findings and experience and learn from the evaluation process (Patton, 2013).
Evaluation use is also looked at as “the way in which an evaluation and information from the evaluation impacts the program being evaluated” (Alkin and Taut, 2003).
Whereas there has been effort to define the term evaluation utilisation explicitly, evaluations are seen to be used in different ways. It may not matter in what ways evaluations are used, as long as the use results in enhanced organisational effectiveness. A number of types of evaluation use are generally found in the literature: instrumental, process, conceptual, symbolic, legitimisation, interactive, and enlightenment use.
Instrumental use: when decision makers use the evaluation findings to modify the object of
the evaluation (i.e. the evaluand) in some way (Shulha & Cousins, 1997). Simply put, this is
the direct action that occurs as a result of an evaluation.
Conceptual use: when the findings of an evaluation help program staff understand the program in a new way (Weiss, 1979). This could be something newly understood about a program, its operations, participants or outcomes through the evaluation. This also implies that an evaluation may not result in direct action but influences understanding.
Enlightenment use: when the evaluation findings add knowledge to the field and so may be used by anyone, not just those involved with the program or its evaluation (Weiss, 1979).
Symbolic use: occurs when an organisation establishes an evaluation unit or undertakes an evaluation study to signal that it is well managed. The actual functions of the evaluation unit or the evaluation’s findings are of limited importance aside from their “public relations value”. The organisation or individuals use the mere existence of evaluations, and not any aspect of the results, to persuade or convince.
Legitimisation use: the evaluation is used to justify current views, interests, policies or actions. The purpose of the evaluation is not to find answers to unanswered questions or to find solutions, but merely to provide support for opinions or decisions already made.
Process use: this occurs when individuals change their thinking and behaviour, and programs/organisations change their procedures and culture, as a result of the learning that happens among those involved during the evaluation process (Patton, 1997). Process use is defined as “....cognitive, behavioural, program and organisational changes resulting from engagement in the evaluation process and thinking evaluatively” (Patton, 2003). Process use incorporates features from instrumental, enlightenment and conceptual use.
Evaluation Utilisation in practice:
When looking at an organisation that uses evaluation to improve its effectiveness, conceptual, instrumental and process uses are the best placed to enhance effectiveness. However, among the three, process use stands out as the one that would help organisations improve their effectiveness, because it integrates most features of conceptual and instrumental use and goes beyond them to look at the changes in behaviour and cognitive abilities that result from engaging in an evaluation, which ultimately influence how organisations work. Embedded in process use, and critical to enhancing organisational effectiveness, are learning and applying the learning from the evaluation process. In the case study below, one will notice how evaluations can actually enhance organisational effectiveness.
The case study presented in this paper depicts an evaluation whose results were applied to program development, an illustration that evaluations can indeed be utilised to enhance organisational effectiveness.
Case study: Evaluation of Re-integration of Ex-inmates, Advance Afrika-Uganda
The project Design and Evaluation
Advance Afrika (AA) piloted the Youth Entrepreneurship Enhancement Project (YEEP), which was implemented together with the Uganda Prisons Service (UPS). The project aimed at improving the livelihoods of vulnerable youth in Northern Uganda in the districts of Gulu, Lira and Kitgum, which generally serve the Lango and Acholi sub-regions. The project was a response to the vulnerability of youth in Northern Uganda who were marginalised by growing up in internally displaced people’s (IDP) camps or as former abductees of the rebel Lord’s Resistance Army, with a majority lacking formal education and therefore unskilled and/or disabled, and thus idle in the labour market. The project specifically targeted its interventions at youth ex-convicts aged 18-35 years in the three districts; by equipping the ex-convicts with entrepreneurship skills to generate income, the project has the potential to improve their quality of life. The project was implemented through prison staff and university youth facilitators at Gulu University.
An evaluation was conducted at the end of the one-year pilot phase, which ran from March 2014 to February 2015, to assess the relevance, efficiency, sustainability and impact of the project. The rationale of the evaluation was to document the experiences of the pilot phase (successes, challenges, opportunities for further growth and lessons learnt) in order to improve the project design.
Participants in the evaluation
In the evaluation, staff of Advance Afrika, the Uganda Prisons Service, university youth facilitators and ex-convicts participated. Advance Afrika staff participated as implementers and respondents, UPS staff as co-implementers and secondary stakeholders, university youth facilitators as secondary stakeholders, and ex-convicts as beneficiaries and primary stakeholders. The evaluation was facilitated by an external evaluator; staff participated as respondents, and the analysis of the responses involved both the staff and the evaluator.
Key findings and recommendations
The project evaluation singled out eight critical areas for improvement:
Training: Conduct refresher training for all UPS trainers, with practical sessions including face-to-face interaction with those with expertise in the field of entrepreneurship to clarify emerging issues. Also increase the duration of the training and integrate assessment of individual learners’ abilities.
Wider stakeholder buy-in: Encourage the trained social workers to extend the training to the wider prison staff in order to strengthen goodwill for the project.
Strengthening advocacy: Strengthen media publicity and conduct targeted and realistic advocacy with specific demands backed by irrefutable evidence.
Follow-up of ex-convicts: UPS to follow up entrepreneurship development as part and parcel of its ordinary course of duty and to ensure that it has competent social workers.
Training manual improvement: UPS management to take direct responsibility for the review of the manual, so as to strengthen its commitment to learning and its internal capacities for sustainability.
Monitoring and evaluation: Advance Afrika and its partners to develop a standard reporting guide for the project, so that they do not waste time reporting on non-issues while glossing over key indicators of success.
Project targeting: The project to keep the number of youth outside the project boundaries to a minimum in order to optimise the results for Northern Uganda.
Internal strengthening: The evaluation recommended trainings such as Community Based Performance Monitoring to build the capacities of both Advance Afrika and key implementers.
Utilisation of the results:
The evaluation of the YEEP culminated in a two-year project, the Advancing Youth Entrepreneurship project (AYE, 2015-2016), and a three-year project, Social Reintegration and Economic Empowerment of Youths (SREE, 2016-2018). Implementation of the eight evaluation recommendations was spread across the two projects. In the two resultant projects, the training manual was revised; the duration of trainings was extended from five days to ten days; an online M&E system accessible to all stakeholders was developed; the geographical scope of the project was extended to include more prison units in more districts in the Lango and Acholi sub-regions; the training of more social workers was planned; refresher trainings were planned for; the use of radio as a platform for advocacy and awareness creation was adopted; and staff were assigned specific caseloads of ex-convicts to follow up. The development of AYE and SREE, informed by the results of the YEEP evaluation, is a demonstration of how an evaluation can be used to enhance organisational effectiveness.
Understanding effectiveness of organisations
The OECD defines effectiveness as the extent to which the development intervention’s objectives were achieved, or are expected to be achieved, taking into account their relative importance (OECD, 2002), and the Development Assistance Committee (DAC) identifies effectiveness as one of the evaluation criteria.
For an evaluation to be utilised (or not) to enhance organisational effectiveness, we cannot deny the fact that there are enabling (or disabling) factors. These range from the quality of the evaluation to organisational, external, technological, relational and environmental factors. Sandison (2005) identifies four factors influencing the utilization of evaluations: quality, organizational, relational and external factors. Preskill et al. (2003) identify five factors that influence evaluation use: organization characteristics, management support, advisory group characteristics, facilitation of the evaluation process, and the frequency, methods and quality of communication.
In this paper, I look at three broadly categorized factors that influence the utilization of evaluations to enhance organizational effectiveness: quality factors, organizational factors and external factors.
Quality factors: Adapting Sandison's categorization, these relate to the purpose and design of the evaluation, the planning and timing of the evaluation, dissemination, and the credibility of the evidence (Sandison, 2005). The needs and audiences for evaluations change over time, and so there is no such thing as “one size fits all”. Williams et al. (2002) allude to the same thought when they say “…..one size does not fit all and each purpose privileges different users”. Patton (1997) also argues that the purpose, approach, methodology and presentation of an evaluation should derive from the intended use by the intended user. Implied in this is that an evaluation should be tailored to meet the specific needs of its use, i.e. to meet the intended use by the intended user. This therefore also demands careful thinking in the selection of stakeholders, and in determining their level of participation in the evaluation process and their interests in the evaluation. This is critical for ensuring ownership of the results, and it influences their use. This view is supported by Williams et al. (2002): “…active participation of stakeholders at all stages of the evaluation cycle promotes use”.
Planning for an evaluation, including its timing, partly determines how deeply stakeholders will participate in the evaluation process, how much of their time they will need to devote to the evaluation, and how timely the evaluation is in meeting their current and future needs. If the evaluation will take a great amount of the stakeholders’ time, it is very likely that there will be partial participation or no participation at all. Also, if there is no perceived importance of how the evaluation meets the stakeholders’ current or future needs, then there is little compelling them to participate in the evaluation process, and therefore there will be limited or no attachment to the evaluation results. Another aspect that cannot be underestimated in planning for an evaluation is its timing, specifically when the evaluation starts, when it is completed and when the results are made available. Oftentimes, evaluations are not utilized if the results are made available long after the key decisions have been made.
Dissemination and credibility of evidence: when an evaluation is completed, it is important that its results are shared with the different stakeholders. It is also key to note that different media for dissemination appeal differently to different stakeholders, and so the evaluator must pay particular attention to the medium and content of the dissemination (for instance through team discussions, workshops or management meetings). It is at dissemination that the stakeholders validate the evidence (and its quality) of the evaluation. Where the evidence is questionable, the chances of utilization are reduced. The evidence should be credible, well researched, objective and expert, and the report itself should be concise and easy to read and comprehend (Sandison, 2005). The quality of evidence is judged by its accuracy, representativeness, relevance, attribution, generalisability and clarity around concepts and methods (Clarke et al., 2014). If the evidence is of poor quality, the data used are doubted, and the recommendations are perceived as irrelevant, the evaluation can in no way be utilized. Feasible, specific, targeted, constructive and relevant recommendations promote use (Sandison, 2005). Credibility of the evidence is also dependent on the competence and reputation of the evaluator; these define the evaluator's credibility. In a situation where the credibility of the evaluator is questionable, then no doubt the evidence is questionable and so will not be taken seriously by the project teams.
Organizational factors: the different constituent components of organizations can in one way or another influence the utilization of evaluations. These components include policies, budgets, structure, systems (including process and knowledge management systems) and staff. Sandison (2005) identifies culture, structure and knowledge management as the organizational factors that influence utilization.
Regarding organizational culture, Sandison (2005) looks at a culture of learning and argues that in a learning organization, senior managers encourage openness to scrutiny and change and transparency, and embed learning mechanisms. Staff members also value evaluation and have some understanding of the process. Performance is integral to working practice, managers actively support staff to learn, and the organization’s leaders promote and reward learning. Implied in this is that organizations should be open to sharing and willing to experiment and improve. But it is also important to note that learning occurs at different levels, i.e. collectively at the organizational level and/or individually at a personal level. It is imperative for managers and organizational leaders to avail avenues that facilitate learning: sharing, doing, reflection and improvement. These could be formal, e.g. seminars, or informal, e.g. breakfast tables. In the absence of a learning culture, chances are high that evaluations will remain on the shelf.
Structure: Over time, due to the increasing demand for monitoring and evaluation, organizations have incorporated an M&E department or unit into their formal structures to support the evaluation function. It is important, though, that there is a good connection or linkage between the different departments/units, e.g. communications and documentation, finance, advocacy, fundraising, and the M&E department or unit. This also requires that M&E staff are linked to key decision makers in the different departments for purposes of getting the decision makers to act, or to push their teams to act, on evaluations. This is well put by Sandison (2005): “…..the evaluation unit is structurally linked to senior decision makers, adequately resourced, and competent. There are clear decision making structures, mechanisms and lines of authority in place. Vertical and horizontal links between managers, operational staff and policy makers enable dissemination and sharing learning. These are permanent opportunist mechanisms for facilitating organization wide involvement and learning”. Where organizational operations are highly decentralized with field offices, it remains important that M&E staff be part of meetings with directors and senior management. The structural set-up of an organization can enable or completely disable the utilization of evaluations. M&E staff should be competent, and the M&E unit should be adequately resourced (financial, human and technological resources).
Systems: organizations have varying systems to support their operations. Among these, an organization should have an M&E system that also allows for sharing, learning, and accountability. This means that dissemination and knowledge management should be deliberate and well planned for. Sandison (2005) argues that there should be systematic dissemination mechanisms and informal and formal knowledge-sharing networks and systems. Where dissemination of an evaluation happens because the evaluator is pre-conditioned to do so as a requirement for the completion of the evaluation, and not as a requirement of the organizational learning culture, then it will be no surprise that the evaluation is not utilized.
Policies: A policy is a deliberate system of principles to guide decisions and achieve rational outcomes (Wikipedia). With the increased institutionalisation of M&E, some organisations and government departments have gone further to develop M&E policies to guide the operations and practice in the organisation. Such a policy also institutionalises evidence-based decision making, which indirectly demands that evaluations are utilised. Where such policies exist and there is goodwill from top management to implement the policy requirements, evaluations have very high chances of being utilised.
Budgets: Evaluations, or broadly M&E, require a budget for implementation like other project interventions. This calls for budgeting for utilisation in the event that some of the recommendations cannot be integrated into the current interventions. If such provision is not there (as is sometimes the case), then the evaluation will be implemented selectively or not at all. Organisations often plan for the execution of the evaluation but not for the implementation of the recommendations.
External factors: external pressure on organizations or commissioners of evaluations may have an influence on the utilization of evaluations. Such pressure may come from donors, professional bodies/associations and project beneficiaries, and from the need to protect reputation and funding.
With the increasing professionalization of evaluation practice, regional and national associations have been formed and have instituted standards for evaluations. These standards can be seen to have an influence on the utilization of evaluations and subsequently to enhance (or not) the effectiveness of organizations. The African Evaluation Association stipulated four principles for evaluation, namely utility, feasibility, precision and quality, and respect and equity (AfrEA, 2006). The Uganda Evaluation Association considers five standards, namely utility, feasibility, quality and precision, ethical conduct, and capacity development (UEA, 2013). The American Joint Committee on Standards for Educational Evaluation (JCSEE, 1994) considers four evaluation standards: utility, feasibility, propriety, and accuracy. What appears to be a “constant” standard is the utility standard. It emphasizes that the evaluation should serve the information needs of the intended users. Implied in this is that the design of the evaluation should bear in mind the intended users and the intended use of the evaluation. The utility standards are intended to ensure that an evaluation will serve the information needs of the intended users (Sanders, 1994). Though this is what is expected, it does not guarantee the use of the evaluation's results.
Project beneficiaries are a constituent force in the external environment of projects that has an influence over the success or failure of projects. Similarly, their role in the utilization (or non-use) of evaluations cannot be overlooked, though of course not as direct users. With the increasing shift from traditional evaluation approaches to more participatory approaches, beneficiaries are often involved in evaluations, but the results are only minimally communicated to them. HAPI (2006) notes that humanitarian agencies are perceived to be good at accounting to official donors, fairly good at accounting to private donors and host governments, and very weak at accounting to beneficiaries. The increased demand for accountability to beneficiaries puts organizations to task to demonstrate it through evaluations, while also pinning the organizations to respond to the issues emerging in the evaluation. Where the beneficiaries have only a passive role to play in project and evaluation implementation, the chances of their influencing the use of the evaluation results are slim.
Organisations, big or small, national or local, young or old, are increasingly competing for financial resources from donor agencies. This implies submitting to donor requirements when applying for funds and religiously following and completing the application form, which also includes a section on project monitoring and evaluation. This creates a scenario in which organisations must demonstrate a solid approach to M&E in order to win the application. It implies that, in this case, an evaluation may be designed and implemented for donor accountability and not to meet the needs of the organisation.
Organisations are also keen to protect their reputation and funding in the face of evaluations; they would not want to lose funding and face as a result of publicised criticism. The fear of repercussions from publicised criticism is real, and so organisations are more determined to protect their image; the rush is not towards the recommendations but towards the image of the organisation. But it is also true that in some cases funding decisions are not based on performance but on donor mandates and other factors. The National Audit Office (2006) affirms that, among some donors, effectiveness is just one of a number of issues to consider when deciding whom to fund. Given that performance is just one factor among others, where an evaluation is perceived to pose no threat to funding streams, its results are of no consequence.
Conclusions:
Quality factors
The general expectation is that when an evaluation is conducted, the results will be appealing and thus compelling to apply. However, this is not always automatic. Even though different authors have different perceptions of use and of the factors that influence use, no single factor can solely influence the utilisation of results.
For an evaluation whose utilisation is to be maximised, planning of the evaluation, stakeholder participation and credibility of the evidence are paramount. From the case study, one will notice that the staff of the organisation participated in the evaluation as more than mere respondents. This built credibility of the evidence generated and ownership of the results, since the analysis was done jointly, facilitated by an external person. This probably explains why the evaluation findings were wholly utilised.
Organisational Factors:
Whereas it is good to have well-structured organisations, with policies and protocols, care must be taken over how these could impact the learning and “doing” culture of the organisation. In the case study, the organisation is less structured, with no clearly designated M&E function. This could mean that there are no visible barriers to the learning and doing culture; it could also imply an absence of the structural barriers to utilisation of evaluations. For instance, in a highly structured organisation, it is possible that the execution of ideas goes through levels, and in a case where not all parties have an equal voice, the ideas of the stronger (respected) voice will carry the day.
External Factors
The increasing role of donors, professional bodies and beneficiaries in how well evaluations are utilised cannot be overlooked. In the case study, the organisation is fairly young, with self-imposed pressure to demonstrate how well it can meet its objectives so as to win donor confidence. One could rightly say that the external and internal pressure compelling the organisation to grow is a reason why the evaluation was wholly utilised.
Acknowledgement
I am greatly indebted to Ms Kathrin Wyss, the Program Delegate, and Stefan Roesch, the Junior Program Officer, Caritas Switzerland-Uganda, for allowing me to take time within a busy work schedule and for their constant interest in supporting the preparation and review of this paper.
Mr Ronald Rwankangi and the team at Advance Afrika, thank you for the constructive interaction that fed into this paper.
And finally, special thanks to Dr. Christ Kakuba (Makerere University, Kampala) and Mr. Geoffrey Babughirana (World Vision Ireland) for the expertise, time and guidance provided to make this paper worth reading.
Bibliography
1. Alkin, M. C. and Taut, S. M.: Unbundling evaluation use, 2003.
2. Dasgupta, P. S.: Human Well-Being and the Natural Environment, 2001.
3. Fleischer, D. N. and Christie, C. A.: Evaluation use: Results from a survey of U.S. American Evaluation Association members, June 2009.
4. Forss, K., Rebien, C. C. and Carlsson, J.: Process use of evaluations: Types of use that precede lessons learned and feedback, 2002.
5. Jackson, E. T. and Kassam, Y.: Knowledge Shared: Participatory Evaluation in Development Cooperation, 1998.
6. Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F. and Volkov, B.: Research on evaluation use: A review of the empirical literature from 1986 to 2005, 2009.
7. National Audit Office: Report by the National Audit Office: Engaging with Multilaterals, December 2006.
8. Patton, M. Q.: Utilization-Focused Evaluation, 1997.
9. Patton, M. Q.: Utilization-Focused Evaluations in Africa, September 1999.
10. Preskill, H., Zuckerman, B. and Matthews, B.: An exploratory study of process use: Findings and implications for future research. American Journal of Evaluation, 2003.
11. Sanders, J. R.: The Program Evaluation Standards. Joint Committee on Standards for Educational Evaluation, 1994.
12. Sandison, P.: The Utilisation of Evaluations, 2005.
13. Shadish, W. R., Cook, T. D. and Leviton, L. C.: Foundations of Program Evaluation, 1991.
14. Shapiro, J.: Monitoring and Evaluation, 2010.
15. Shulha, L. M. and Cousins, J. B.: Evaluation use: Theory, research and practice since 1986, 1997.
16. Weiss, C. H.: The many meanings of research utilization, 1979.
17. www.alnap.org/resource/10441
18. www.cdc.gov/eval/guide/introduction/
19. www.unodc.org/unodc/en/evaluation/why-we-evaluate.html