This document provides a summary of metaevaluation, including its definition, history, types, models, purpose, process, standards, and checklists. Metaevaluation is defined as the evaluation of an evaluation to assess its quality and adherence to standards. It gained prominence in the late 20th century with the increased focus on educational program effectiveness. There are two types: proactive and retroactive. Key standards for metaevaluation include utility, feasibility, propriety, accuracy, and accountability. The metaevaluation process involves 10 steps, including defining questions, collecting information, analyzing adherence to standards, and reporting findings. Checklists are provided to aid in conducting metaevaluations.
This document discusses factors that influence whether evaluations improve organizational effectiveness. It defines evaluation and discusses different types of evaluation use, including instrumental, conceptual, and process use. Process use is seen as most likely to enhance effectiveness by facilitating learning and changes in behavior. The document presents a case study of an evaluation of an ex-inmates reintegration project that was subsequently utilized to improve the project design and inform two new projects. Key factors influencing evaluation utilization include quality of the evaluation, organizational support, and external environment. Quality entails stakeholder participation, timely evaluation, and credible evidence.
This paper was presented at the 12th European Evaluation Society Biennial Conference in Maastricht, Netherlands. It examines "Use of Evaluation Results to Enhance Organizational Effectiveness: Do Evaluation Findings Improve Organisational Effectiveness?"
Untangling some challenges and opportunities in water research on the African continent today – with focus on domestic and agricultural use
Presentation: Stella Williams, Agricultural Economist and Professor, Obafemi Awolowo University, Ile Ife, Osun State, Nigeria
The International Forum on Water and Food (IFWF) is the premier gathering of water and food scientists working on improving water management for agricultural production in developing countries.
The CGIAR Challenge Program for Water and Food (CPWF) represents one of the most comprehensive investments in the world on water, food and environment research. The Forum explores how the CPWF research-for-development (R4D) approach can address water and food challenges through a combination of process, institutional and technical innovations.
This document outlines a model toolkit for conducting impact evaluations. It discusses key concepts in impact evaluation including definitions of impact, theories of change, causal attribution, and mixed methods approaches. The document proposes an ontological framework to guide impact assessment planning, covering aspects like subject area, target groups, research design, sampling, data collection and analysis methods. It describes experimental, quasi-experimental and non-experimental research designs for addressing causal attribution and achieving credible results. The goal is to integrate monitoring, evaluation and research from the beginning to generate a range of evidence and understand both outcomes and impacts of interventions over time.
Meta Evaluation and Evaluation Dissemination (Manual 3) by Dawit Wolde
This document provides an overview of meta evaluation and evaluation dissemination. It discusses the historical origin and basic concepts of meta evaluation, including defining meta evaluation and recognizing its evolution. It also outlines the presentation, which will cover standards of evaluation, guiding principles of evaluators, types of meta evaluators, evaluation dissemination, evaluation reports, and evaluation use. Finally, it lists the training methods that will be used, including interactive lectures, group discussions, and plenary presentations over 20 hours.
1. The quality of USAID evaluation reports has generally improved over 2009-2012, with improvements in factors like use of multiple data collection methods and identification of limitations. However, some factors did not improve.
2. USAID evaluation reports excelled in basic characteristics but fell short in factors like distinguishing findings from conclusions/recommendations and including an evaluation specialist.
3. Overall quality of USAID evaluations was moderately high but could be improved by increasing involvement of evaluation specialists and guidance on new standards.
The document discusses evaluation of health programs. It defines evaluation as the systematic acquisition and assessment of information to provide useful feedback. The main goals of evaluation are to influence decision-making and policy formulation through empirically-driven feedback. Formative evaluation assesses needs and implementation, while summative evaluation determines outcomes, impacts, costs and benefits. Evaluation questions, methods, and frameworks are described to establish program merit, worth and significance based on credible evidence from stakeholders. Standards ensure evaluations are useful, feasible, proper and accurate.
This document discusses developing a research agenda for impact evaluation in development. It argues the agenda needs to address more than just causal inference challenges, and should cover all aspects of impact evaluation practice. This includes issues like values clarification, measurement, synthesis, and managing joint projects. The research agenda also needs to recognize development that goes beyond discrete projects to include partnerships and community involvement. Developing the agenda requires consultation, identifying gaps, and reviewing various types of research needed like documenting practice, positive deviance studies, and longitudinal studies. Some example research questions are provided.
This document discusses challenges in evaluating human rights progress and techniques that can help. It notes both benefits and drawbacks to measuring results, and challenges like long timeframes and attribution. A theory-driven approach is recommended to identify pathways and indicators to measure short-term outcomes contributing to long-term goals. Gathering diverse feedback, proxies for data, and transparency are also advised. Ongoing learning approaches focus on understanding program design and connecting activities to intended outcomes.
Evaluating an Integrated Family Planning and Mother/Child Health Program by MEASURE Evaluation
The document summarizes an evaluation of an integrated family planning and mother/child health program in Bangladesh called the National Service Delivery Program (NSDP). The NSDP aimed to achieve further reductions in fertility by integrating the delivery of family planning services and an essential services package of reproductive and maternal/child health services. It used a network of NGOs operating clinics and village providers to deliver integrated services through a "one-stop" model. Impact was evaluated using a difference-in-difference analysis comparing changes in indicators between program and non-program areas from 1998 to 2005. Results found modest increases in modern contraceptive use and decreases in pregnancy rates, as well as larger effects from the health communication component on antenatal care and
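For reference, the difference-in-difference estimate mentioned in this summary has a standard form; the following is a generic reconstruction, not a formula quoted from the evaluation report. With P denoting program areas and N non-program areas:

\[
\widehat{\Delta}_{\text{DiD}} = \left(\bar{Y}^{P}_{2005} - \bar{Y}^{P}_{1998}\right) - \left(\bar{Y}^{N}_{2005} - \bar{Y}^{N}_{1998}\right)
\]

Here \(\bar{Y}\) is the mean of an indicator such as the modern contraceptive prevalence rate. Under the parallel-trends assumption (both sets of areas would have changed alike without the program), the second difference nets out secular change and isolates the program effect.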
1) Evaluation and impact assessment approaches can be categorized based on their goals (formative, summative, developmental) and methods (qualitative, quantitative, mixed).
2) In organizations like the CGIAR, impact assessment was initially implemented to ensure accountability but now aims to facilitate organizational learning.
3) For impact assessment to influence organizational change in the CGIAR, it needs to help build consensus on needed reforms, develop new frameworks for analyzing social processes, and work with coalitions of supportive stakeholders.
This document discusses research on developing employability, positive values, diversity awareness, and civic engagement among university graduates in order to equip them to create positive change. It reviews literature showing that employability requires more than academic skills, including work experience relevant to graduates' fields. Developing positive values in university is important but challenges remain in ensuring graduates maintain these values after graduating. Promoting diversity awareness meets growing societal needs and supports skills like creativity. University experiences like community engagement make graduates more likely to be engaged citizens who can influence positive change.
Practice question lets take a moment to place the practice que... by DIPESH30
The document discusses the application of deliberative democracy theory to corporate social responsibility. It describes a theory put forward by Scherer and Palazzo that envisions companies as political actors in a globalized society, participating in deliberative processes involving civil society and governments. This view draws on the theory of deliberative democracy developed by Jurgen Habermas, which derives legitimacy from the involvement of all groups in democratic interaction. However, critics argue that companies may still prioritize their economic interests over public interests in these processes. The document also discusses whether integrated social contract theory or international law standards provide a better normative framework to guide business behavior globally.
This document discusses evaluation methodology for practices in science communication. It begins by noting that the lack of systematic evaluation has made it difficult to compare practices, develop theories, and ensure accountability. The author argues for developing a common evaluation language while acknowledging the diversity of science communication. A key challenge is that practices have diverse purposes and actors. The author proposes using program theory and logic models to systematically evaluate practices in an ex post facto manner. This involves practitioners describing the purposes and means of a practice after completion to facilitate evaluation. The discussion considers how to account for change and complexity in program theories. The goal of developing evaluation is to improve practices for public benefit rather than administrative control.
Evaluation for researchers is an important tool in assessing the merit of public and charitable services that everyone can use, and identifying ways in which those services could be improved.
Dr Helen Kara, an evaluation research specialist, presents the key elements of good practice at each stage of the evaluation process, helping you to better understand your research.
To learn more about evaluation download Helen's eBook: Beginners’ Guide to Evaluation - http://bit.ly/1Kr0vsG
Presentation by Lini Wollenberg, Low Emissions Development Leader, CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), at the Green Climate Fund Independent Evaluation Unit Learning-Oriented Real-Time Impact Assessment (LORTA) Program Inception Workshop, July 24-26, 2018, Bangkok, Thailand.
A toolkit for complex interventions and health technologies using normalizati... by Normalizationprocess
The document introduces Normalization Process Theory (NPT), a conceptual model for evaluating the implementation and integration of new health technologies and complex interventions. NPT focuses on the work done by individuals and groups to embed interventions in practice. The NPT Toolkit provides managers, clinicians and researchers with a simplified framework based on NPT to assess implementation processes. It includes questions related to coherence, participation, action and appraisal, and allows users to gauge these implementation factors using a visual interface. The toolkit is meant as an aid for critical thinking, not a validated measurement instrument.
The document describes the Knowledge-To-Action Cycle, which consists of an Action Cycle and Knowledge Funnel. The Action Cycle is a 7-phase process for implementing knowledge to create planned changes. It involves identifying knowledge gaps, adapting knowledge to context, assessing barriers, selecting interventions, monitoring use, evaluating outcomes, and sustaining use. The Knowledge Funnel distills knowledge through inquiry, synthesis, and creating tools/products for end-users.
This presentation gives a vivid description of the basics of doing a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA) illustrated by a practical example from the CLICS project. It also includes the CDC framework for program evaluation.
N.B.: Kindly open the PPT in SlideShare mode to make full use of the animations.
The document evaluates Patient Opinion, a web-based patient feedback platform, as a case study to assess its potential for driving quality improvement in healthcare. It finds that while the platform meets patients' needs for accessible, responsive and independent feedback, there is little evidence it systematically leads to quality improvement. For the platform to better support quality goals, organizations need to embed feedback into governance, learning, and change processes and promote customer-focused behaviors. The success of the platform also depends on maintaining traffic, subscriptions and champions within subscriber organizations.
Impact evaluation is used to determine the effectiveness of programs by examining outcomes and determining if goals were achieved. It typically occurs retrospectively on mature programs and uses approaches like objectives-based, needs-based, or process-outcome evaluations to establish what works and why by measuring outcomes rather than just outputs. The major concerns are determining if the program was implemented as planned and what benefits were achieved for participants.
How to structure your table for systematic review and meta analysis – Pubrica
A systematic review is "a scholarly method in which all empirical evidence that meets pre-specified eligibility requirements is gathered to address a particular research question."
Continue Reading: https://bit.ly/3AeFIYY
A Multidisciplinary Analysis of Research and Performance Synthesis Utilized i... by ROBELYN GARCIA PhD
This document discusses research analyzing the use of exercise programs to treat depression in Hispanic seniors. It describes plans to recruit 30 Hispanic seniors aged 50-69 to participate in a study evaluating the effects of a prescribed and monitored exercise program on depression. Participants will complete consent forms, questionnaires on depression and physical readiness, and will then participate in an exercise program while being monitored. They will complete post-program depression questionnaires to determine if symptoms decreased. The research aims to add to evidence that exercise can be an effective treatment for depression in older Hispanic adults.
Theory-Based Approaches for Assessing the Impact of Integrated Systems Research - Brian Belcher, Royal Roads University. Measuring the Impact of Integrated Systems Research (September 27, 2021 – September 30, 2021). Three-day virtual workshop co-hosted by the CGIAR Research Programs on Water Land and Ecosystems (WLE); Forests, Trees and Agroforestry (FTA); Policies, Institutions, and Markets (PIM); and SPIA, the Standing Panel on Impact Assessment of the CGIAR. The workshop took stock of existing and new methodological developments of monitoring, evaluation and impact assessment work, and discussed which are suitable to evaluate and assess complex, integrated systems research.
Use of Qualitative Approaches for Impact Assessments of Integrated Systems Research: Our Experience - Monica Biradavolu, SPIA. Measuring the Impact of Integrated Systems Research (September 27, 2021 – September 30, 2021). Three-day virtual workshop co-hosted by the CGIAR Research Programs on Water Land and Ecosystems (WLE); Forests, Trees and Agroforestry (FTA); Policies, Institutions, and Markets (PIM); and SPIA, the Standing Panel on Impact Assessment of the CGIAR. The workshop took stock of existing and new methodological developments of monitoring, evaluation and impact assessment work, and discussed which are suitable to evaluate and assess complex, integrated systems research.
This document provides an overview of a study assessing the functionality of the Help Desk Section of the Bureau of Internal Revenue Revenue Data Center – Visayas. The study aims to evaluate the help desk's reporting, resolution, processing, procedures and strategies for addressing problems. It also seeks to identify problems in the help desk's operations to propose improvements. The document outlines the research methodology, which includes distributing questionnaires to help desk users and staff. It also provides context on the Bureau of Internal Revenue and the role of the Revenue Data Center – Visayas Help Desk Section. Key terms related to information technology and the tax system are defined.
The document analyzes data from Help Desk users and staff at the Bureau of Internal Revenue Revenue Data Center - Visayas regarding the functionality of the Help Desk Section. It finds that the Help Desk Section is moderately functional overall, specifically in encouraging users to report issues, problem solving procedures, and strategies for addressing problems. However, it is fully functional in providing resolution and processing issues logged. The document presents these findings through tables analyzing various aspects of the Help Desk Section's functionality based on user and staff assessments.
The document discusses COSMOS, an IoT framework that aims to develop technologies for managing exponentially increasing data from connected devices in a smart city context. Key objectives of COSMOS include enabling devices to learn from each other's experiences, extracting valuable knowledge from data flows, and facilitating end-to-end security and privacy. The framework provides capabilities for security, data management, analytics, device management, and social aspects. Example application scenarios discussed are optimizing public transport services using vehicle data and managing building energy use based on real-time appliance information.
Asghar Ali is applying for the position of Manager Administration with Eden Builders in Lahore. He has over 19 years of experience in telecommunications with Pakistan Air Force and PTCL, holding positions such as Chief Technician, Section Commander, and Assistant Manager. He has expertise in areas such as network planning, operations and maintenance, technical problem solving, and staff management. He believes his qualifications and experience make him a strong fit for the Manager Administration role.
Domenico Ragozzino presents on community energy as a grassroots approach to energy transition. Community energy involves low or zero carbon initiatives by groups like schools, businesses, neighborhoods, or entire towns. It can include reducing, managing, and purchasing energy. Projects are funded through donations, grants, and share offers, sometimes with bank loans. They typically involve renewable energy like solar PV, wind, or biomass. Community energy decentralizes, democratizes, demonstrates, and decarbonizes energy systems through various business models centered around social aims. Four key success factors are social capital within the community, knowledge and networking, policy support, and access to funding and support for technical systems. Municipalities can empower community energy through
This document provides biographical information about Assi Dimitrolopoulou, including her educational and professional background as a scenographer, production designer, architect, and lecturer. It details her postgraduate studies in scenography, visual arts, and architecture. It also lists over 80 theatre collaborations in Greece and internationally since 1993. Selected artwork is showcased, including set designs, costumes, and production design for theatre and film projects.
1. This document is a curriculum vitae for Abhishek Kumar that outlines his educational qualifications, technical skills, projects, experience, and personal details.
2. Abhishek holds a B.Tech in computer science from BPUT University and has skills in languages like C, Java, and PHP as well as tools like Oracle, MySQL, and Dreamweaver.
3. His academic projects include an inventory management system and a municipal corporation project developed using technologies like Oracle, Java, and .NET. He also worked as a project member on a coal application system for 2 years.
Project Feasibility Study for the Manufacture of Revolvers a... by Camilo Abogado
The document provides a feasibility study for a cooperative in Danao City, Philippines called Workers League of Danao - Multipurpose Cooperative (WORLD-MPC) to manufacture revolvers and pistols. WORLD-MPC was established in 1994 and received licensing to manufacture handguns in 1996. The study analyzes the gun making industry, projected demand for firearms domestically based on registration data, and concludes the cooperative could produce a minimum of 452 revolvers annually to capture under 5% of the domestic market.
This document summarizes a student group presentation on whether competition helped achieve the positive effects of privatization. It provides background on privatization and defines it as the transfer of assets from the public to private sector. Examples of privatization in the UK, Korea, and Malaysia are outlined. The UK example highlights the privatization of British Telecom and introduction of competition through other electric firms. Positive impacts of privatization in Korea and Malaysia included improved public transport and increased access to education and healthcare. The conclusion is that competition did help realize the economic and employment benefits of privatization.
The document proposes establishing a community cafe in Merstham, Surrey to address issues of food insecurity and lack of access to healthy foods and cooking skills. The cafe would source surplus food to serve healthy meals in a community setting, offer cookery classes and cultural events, and create volunteering and training opportunities. It outlines the vision, value proposition, customer base, revenue streams, costs, partnerships, and next steps to set up a community interest company and social enterprise to operate the community cafe.
Intro to Scrum for Software Development Team by Ana Pegan
Here are the steps to break down a user story into a sprint backlog:
1. The product owner presents the user story to the team:
"As a vacation planner, I want to see potential destinations on a map so I can pick a location."
2. The team discusses what needs to be done to implement the story and breaks it into specific tasks:
- Design database schema for destinations
- Create destinations table
- Add sample destinations to database
- Design map view UI
- Integrate map view into app
- Display destinations on map
3. Estimates are made for each task. Tasks are ordered and pulled into the sprint backlog based on priority, dependencies, and team
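As a rough illustration only, here is a minimal Python sketch, not part of the original presentation, showing how the tasks above might be captured as a sprint backlog item; the hour estimates are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    estimate_hours: int   # team's estimate for this task
    done: bool = False

@dataclass
class SprintBacklogItem:
    user_story: str
    tasks: list = field(default_factory=list)

    def remaining_hours(self) -> int:
        # Sum of estimates for tasks not yet completed
        return sum(t.estimate_hours for t in self.tasks if not t.done)

item = SprintBacklogItem(
    user_story=("As a vacation planner, I want to see potential destinations "
                "on a map so I can pick a location."),
    tasks=[
        Task("Design database schema for destinations", 4),
        Task("Create destinations table", 2),
        Task("Add sample destinations to database", 1),
        Task("Design map view UI", 6),
        Task("Integrate map view into app", 8),
        Task("Display destinations on map", 4),
    ],
)
print(item.remaining_hours())  # 25 hours of estimated work remaining
```

The remaining-hours figure is the raw input a burndown chart plots day by day during the sprint.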
A Social Return on Investment (SROI) Analysis of Contemporary Architecture Ce... by Domenico Ragozzino
This document summarizes the results of a social return on investment (SROI) analysis conducted on two community gardens in Budapest, Hungary called Kék and Kerthatár. The analysis followed the six stages of an SROI assessment: 1) establishing scope and identifying stakeholders; 2) mapping outcomes; 3) evidencing outcomes and assigning monetary values; 4) establishing impact; 5) calculating the SROI ratio; and 6) reporting and embedding findings. Surveys and interviews with stakeholders found several positive social outcomes of participating in the community gardens, including reduced food costs, improved health, skills development, increased social connections and community belonging, and stress relief from urban living. The SROI analysis monetized these outcomes to calculate
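Stage 5 (calculating the SROI ratio) follows a standard form; the following is the generic formulation, not a figure or formula quoted from the Budapest study:

\[
\text{SROI ratio} = \frac{\sum_{t=1}^{T} V_t / (1+r)^t}{\text{Value of inputs}}
\]

where \(V_t\) is the monetized value of outcomes attributed to the gardens in year \(t\) (after adjusting for deadweight, displacement, and attribution) and \(r\) is the discount rate. A ratio of, say, 3:1 reads as three units of social value created per unit invested.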
If implemented, will the privatisation of the NHS impact on staff motivation ... by Ryan Gill
This document discusses how privatization of the NHS may impact staff motivation levels within the finance team at Cambridgeshire Community Services NHS Trust. It analyzes the potential effects of privatization on the NHS and staff motivation more broadly. The document also examines current staff motivation within the finance team through a survey. It then analyzes Maslow's hierarchy of needs theory and Herzberg's two-factor theory to understand what motivates employees and how managers could improve motivation, especially if privatization were to occur.
II Alternative Approaches to Program Evaluation Part 1.docx by sheronlewthwaite
Part II: Alternative Approaches to Program Evaluation
In Part One, we referred to the varying roles that evaluation studies can play in education, government, business, nonprofit agencies, and many related areas, and readers were introduced to some of the different purposes of evaluation. We hinted at some of the different approaches to evaluation, but we have not yet exposed the reader to these approaches. We will do so in Part Two.

In Chapter 4, we examine the factors that have contributed to such differing views. Prior efforts to classify the many evaluation approaches into fewer categories are discussed, and the categories that we will use in the remainder of this book are presented.

In Chapters 5 through 8, we describe four categories of approaches that have influenced evaluation practice. These general approaches include those we see as most prevalent in the literature and most popular in use. Within each chapter, we discuss how this category of approaches emerged in evaluation, its primary characteristics, and how it is used today. Within some categories, there are several major approaches. For example, participatory evaluation has many models or approaches. We describe each approach, including its distinguishing characteristics and contributions, the ways in which the approach has been used, and its strengths and weaknesses. Then, in Chapter 9, we discuss other themes or movements in evaluation that transcend individual models or approaches, but that are important influences on evaluation practice today.

Many evaluation books, often authored by the developer of one of the approaches we discuss, present what Alkin (2004) has called “prescriptive theories” or approaches to evaluation. These books are intended to describe that approach in depth and, in fact, to suggest that the approach presented is the one that evaluators should follow. This book does not advocate a particular approach. Instead, we think it is important for evaluators and students studying evaluation to be familiar with the different approaches so they can make informed choices concerning which approach or which parts of various approaches to use in a particular evaluation. Each approach we describe tells us something about evaluation, perspectives we might take, and how we might carry out the evaluation. During this time of increased demands for evaluation in the United States and the world—what Donaldson and Scriven (2003) have called the “second boom in evaluation”—it is important for evaluators to be aware of the entire array of evaluation approaches and to select the elements that are most appropriate for the program they are evaluating, the needs of clients and other stakeholders, and the context of the evaluation.
Alternative Views of Evaluation

Orienting Questions

1. Why are there so many different approaches to evaluation?
2. Why is evaluation theory, as reflected in d ...
The field of program evaluation presents a diversity of images a.docx by cherry686017
The field of program evaluation presents a diversity of images and claims about the nature and role of evaluation that confounds any attempt to construct a coher- ent account of its methods or confidently identify important new developments. We take the view that the overarching goal of the program evaluation enterprise is to contribute to the improvement of social conditions by providing scientifically credible information and balanced judgment to legitimate social agents about the effectiveness of interventions intended to produce social benefits. Because of its centrality in this perspective, this review focuses on outcome evaluation, that is, the assessment of the effects of interventions upon the populations they are intended to benefit. The coverage of this topic is concentrated on literature published within the last decade with particular attention to the period subsequent to the related reviews by Cook and Shadish (1994) on social experiments and Sechrest & Figueredo (1993) on program evaluation.
The word ‘evaluation’ has become increasingly used in the language of community, health and social services and programs. The growth of talk and practice of evaluation in these fields has often been promoted and encouraged by funders and commissioners of services and programs. Following the interest of funders has been a growth in the study and practice of evaluation by community, health and social service practitioners and academics. When we consider why this move in evaluative thinking and practice has occurred, we can assume the position of the funder and simply answer, ‘...because we want to know if this program or service works’. Practitioners, specialists and academics in these fields have been called upon by governments and philanthropists to aid the development of effective evaluation. Over time, they have led their own thinking and practice independently. Evaluation in its simplest form is about understanding the effect and impact of a program, service, or indeed a whole organization. Evaluation as a practice is not so simple, however, largely because in order to assess impact, we need to be very clear at the beginning what effect or difference we are trying to achieve.
The literature review begins with an overview of qualitative and quantitative research methods, followed by a description of key forms of evaluation. Health promotion evaluation and advocacy and policy evaluation will then be explored as two specific domains. These domains are not evaluation methodologies, but forms of evaluation that present unique requirements for effective community development evaluation. Following this discussion, the review will explore eight key evaluation methodologies: appreciative enquiry, empowerment evaluation, social capital, social return on investment, outcomes-based evaluation, performance dashboards and scorecards, and developmental evaluation. Each of these sections will include specific methods, the values base of each methodo ...
This document provides an overview of assessment and evaluation approaches. It discusses educational evaluation standards from organizations in the United States and Philippines. Evaluation approaches are classified based on epistemology, perspective, and orientation. Objectivist approaches use empirical inquiry while subjectivist approaches consider personal experiences. True evaluation determines value, quasi-evaluation may or may not, and pseudo-evaluation promotes views. Various evaluation methods are described like experimental research, testing programs, and accountability studies.
The document provides an overview of the IFRC Framework for Evaluation, which guides how evaluations are planned, managed, conducted, and utilized by the IFRC Secretariat. The framework promotes reliable, useful, and ethical evaluations to contribute to organizational learning, accountability, and the IFRC's mission. It outlines key parts of the framework, including evaluation criteria to guide what is evaluated and standards and processes to guide how evaluations are conducted. The framework is intended to guide those involved in evaluations and inform stakeholders about expected practices.
programme evaluation by priyadarshinee pradhan (Priya Das)
This document discusses concepts, needs, goals and tools related to program evaluation. It defines evaluation as a systematic process to determine the merit, worth and significance of a program or intervention using set standards and criteria. The primary purposes of evaluation are to gain insight and enable reflection to identify future changes. Some key goals of program evaluation include improving program design, assessing progress towards goals, and determining effectiveness and efficiency. Common tools for program evaluation discussed include interviews, observations, questionnaires, and case studies.
The Utilization of DHHS Program Evaluations: A Preliminary Examination by Washington Evaluators
Washington Evaluators Brown Bag
by Andrew Rock and Lucie Vogel
October 5, 2010
The presentation will describe a study conducted by the Lewin Group on the utilization of program evaluations in the Department of Health and Human Services for the Assistant Secretary for Planning and Evaluation. The study used an online survey of project officers and managers from a sample of program evaluations selected from the Policy Information Center database. To supplement the survey data, Lewin conducted focus groups with senior staff in six agencies. Key findings of the study focused on direct, conceptual and indirect use and the importance of high quality methods, stakeholder involvement in evaluation design, presence of a champion, and study findings that were perceived to be important. The study concluded with recommendations for a strengthened internal evaluation group within HHS and future research using a case study approach for greater in-depth examination.
Mr. Andrew Rock initiated/conceived and was the Project Officer (COTR) for the study. He works for the Office of Planning and Policy Support in the Office of the Assistant Secretary for Planning and Evaluation (ASPE), HHS. He is responsible for the Department's annual comprehensive report to Congress on HHS evaluations, coordinates the HHS legislative development process, represents his office on the Continuity of Operations Workgroup, and has worked on various cross-cutting issues including homelessness, tribal self-governance, and health reform. In addition to his work in ASPE, he has worked at the Centers for Medicare and Medicaid Services, the Public Health Service, and the Office of the National Coordinator for Health Information Technology.
Ms Lucie Vogel served as a Stakeholder Committee Member for the study. She works in the Division of Planning, Evaluation and Research in the Indian Health Service, developing Strategic and Health Service Master Plans, conducting evaluation studies, and reporting on agency performance. She previously served in evaluation and planning positions in the Food Safety and Inspection Service, the Virginia Department of Rehabilitative Services, the University of Virginia, and the Wisconsin Department of Health and Social Services.
490 The Future of Evaluation Orienting Questions 1. H.docx by blondellchancy
The Future of Evaluation

Orienting Questions

1. How are future program evaluations likely to be different from current evaluations in
• the way in which political considerations are handled?
• the approaches that will be used?
• the involvement of stakeholders?
• who conducts them?
2. How is evaluation like some other activities in organizations?
3. How is evaluation viewed differently in other countries?
We have reached the last chapter of this book, but we have only begun to share what is known about program evaluation. The references we have made to other writings reflect only a fraction of the existing literature in this growing field. In choosing to focus attention on (1) alternative approaches to program evaluation, and (2) practical guidelines for planning, conducting, reporting, and using evaluation studies, we have tried to emphasize what we believe is most important to include in any single volume that aspires to give a broad overview of such a complex and multifaceted field. We hope we have selected well, but we encourage students and evaluation practitioners to go beyond this text to explore the richness and depth of other evaluation literature. In this final chapter, we share our perceptions and those of a few of our colleagues about evaluation’s future.

The Future of Evaluation

Hindsight is inevitably better than foresight, and ours is no exception. Yet present circumstances permit us to hazard a few predictions that we believe will hold true for program evaluation in the next few decades. History will determine whether
Predictions Concerning the Profession of Evaluation
1. Evaluation will become an increasingly useful force in our society. As noted, evaluation will have increasing impacts on programs, on organizations, and on society. Many of the movements we have discussed in this text—performance monitoring, organizational learning, and others—illustrate the increasing interest in and impact of evaluation in different sectors. Evaluative means of thinking will improve ways of planning and delivering programs and policies to achieve their intended effects and, more broadly, improve society.

2. Evaluation will increase in the United States and in other developed countries as the pressure for accountability weighs heavily on governments and nonprofit organizations that deliver vital services. The emphasis on accountability and data-based decision making has increased dramatically in the first decade of the twenty-first century. Also, virtually every trend points to more, not less, evaluation in the public, private, and nonprofit sectors in the future. In some organizations, the focus is on documenting outcomes in response to external political pressures. In other organizations, evaluation is being used for organizational growth and development, which should, ultimately, improve the achievement of those outcomes. In each context, however, evaluation is in dema ...
This is a detailed paper on the use of evaluations to enhance organisational effectiveness, with a case study of Advance Afrika, a Uganda-based NGO working on the re-integration and economic empowerment of ex-convicts.
Evaluating the quality of quality improvement training in healthcare by Daniel McLinden
Quality Improvement (QI) in healthcare is an increasingly important approach to improving health outcomes, improving system performance and improving safety for patients. Effectively implementing QI methods requires knowledge of methods for the design and execution of QI projects. Given that this capability is not yet widespread in healthcare, training programs have emerged to develop these skills in the healthcare workforce. In spite of the growth of training programs, limited evidence exists about the merit and worth of these programs. We report here on a multi-year, multi-method evaluation of a QI training program at a large Midwestern academic medical center. Our methodology will demonstrate an approach to organizing a large-scale training evaluation. Our results will provide the best available evidence for features of the intervention, outcomes and the contextual features that enhance or limit efficacy.
This document appears to be a student project submitted for a Master's degree in Commerce. It discusses evaluating the impact of training and development programs. The project was submitted by Amey Milind Patil to the University of Mumbai in partial fulfillment of an M.Com degree under the guidance of Professor Soni Hassani. It includes declarations, certificates, acknowledgments, an index, and outlines several chapters on the introduction, literature review, evaluating training and development, and conclusions.
SOCW 6311 wk 11 discussion 1 peer responses
Respond to at least two colleagues by doing the following:
Respond to at least two colleagues by offering critiques of their analyses. Identify strengths in their analyses and strategies for presenting evaluation results to others.
Identify ways your colleagues might improve their presentations.
Identify potential needs or questions of the audience that they may not have considered.
Provide an additional strategy for overcoming the obstacles or challenges in communicating the content of the evaluation reports.
Give the name first and the references after each person's response.
The instructor wants the layout like this:
Respond to at least two colleagues (2 peer posts are provided) by doing all of the following:
Identify strengths of your colleagues’ analyses and areas in which the analyses could be improved.
Your response
Address his or her evaluation of the efficacy and applicability of the evidence-based practice,
Your response
[Evaluate] his or her identification of factors that could support or hinder the implementation of the evidence-based practice,
Your response
And [evaluate] his or her solution for mitigating those factors.
Your response
Offer additional insight to your colleagues by either identifying additional factors that may support or limit implementation of the evidence-based practice or an alternative solution for mitigating one of the limitations that your colleagues identified.
Your response
References
Your response
Peer 1: McKenna Bull
RE: Katie Otte Initial Post-Discussion 1 - Week 11
Identify strengths in their analyses and strategies for presenting evaluation results to others.
You provided an insightful analysis of this particular process evaluation, and it seems that you were able to design a comprehensive presentation guideline. I agree with your tactic to break the presentation up into categories, and the categories you have selected seem to address the major components of the program, the evaluation itself, and the findings of said evaluation. You also provided a great analysis and summary of the PATHS program. The purpose of the program is clear, and the overarching purpose of the evaluation was made clear in your synopsis as well.
Identify ways your colleagues might improve their presentations.
You addressed outcome measures very well; however, some information may be lacking in regard to the overall evaluation methods, such as who was collecting the data, how they were trained, and how their training or standing could limit potential bias. This may be an important piece of information that could help provide audience members with a better understanding of the evaluation process as a whole.
Identify potential needs or questions of the audience that they may not have considered.
As mentioned by Law and Shek (2011), this program was designed and facilitated in Hong Kong, China.
This document provides guidance on evaluating nutrition initiatives. It outlines key steps to developing an evaluation framework, including: defining objectives; selecting process, outcome and impact indicators; and choosing appropriate data collection methods. The summary should evaluate the intervention, not just describe it. An effective evaluation demonstrates the value of the initiative and whether objectives were achieved.
This document outlines assessment methodology and program evaluation. It defines key terms like assessment, methodology, and evaluation. It describes the four main steps of assessment methodology as developing guidelines, designing the approach, collecting and analyzing evidence, and reporting findings. It discusses types of program evaluation like formative, process, outcome, and impact evaluations. It states that outcome/summative evaluation would benefit understanding dietary assessment by evaluating program effects on health outcomes. The benefits of thorough program evaluation include determining efficiency, improving operations, and providing required knowledge.
The implementation 'black box' and evaluation as a driver for change. Presentation by Katie Burke and Claire Hickey of the Centre for Effective Services.
The document discusses conducting an objective, credible, and fair program evaluation by employing internal and external advisors to align with professional evaluation standards, as well as using multiple data collection methods to increase validity and understanding per Fitzpatrick's advice. It also notes the importance of the evaluator examining their own biases to mitigate against bias in the formal evaluation process and ensuring evaluation reports are not distorted in presentation. The document recommends following the American Evaluation Association's Guiding Principles for program evaluations.
SOCW 6311 WK 1 responses: Respond to at least two colleagues (.docx), by samuel699872
SOCW 6311 WK 1 responses
Respond to at least two colleagues
(You have to compare my post to 2 SEPARATE peer posts, respond to their posts, and ask a question; I have provided all three.)
by noting the similarities and differences in the factors that would support or impede your colleague’s implementation of evidence-based practice as noted in his or her post to those that would impact your implementation of evidence-based practice as noted in your original post. Offer a solution for addressing one of the factors that would impede your colleague’s implementation of evidence-based practice.
It does not have to be long, but it has to include in-text citations and full references.
MY POST
SummerLove Holcomb
RE: Discussion - Week 1
The Characteristics of the evidence-based practice (EBP)
Evidence-based programs are defined as programs whose effectiveness has been established through rigorous assessment. One of the key features of EBPs is that they have been assessed thoroughly in experimental or quasi-experimental studies, and their evaluations have been subjected to critical peer review, meaning that evaluation experts have reached a conclusion about them. EBP requires the ability to differentiate between unverified opinions about psychosocial interventions and facts about their effectiveness. It involves a process of inquiry, provided to practitioners and described for physicians, that integrates the best evidence, clinical expertise, and patient values into patient management, practice management, and health policy decision-making processes (Small & O'Connor, 2007).
The assessment of the factors that are supporting or impeding the adoption of the evidence-based practice
Several factors are associated with failure to successfully adopt EBP. Implementing EBP, for example in healthcare facilities, requires a dedication of time, so a lack of adequate time for training and implementation makes it hard to adopt EBP within a facility. Adoption of evidence-based practice also requires adequate resources to facilitate effective implementation, which implies that smaller organizations with unstable capital income might not adopt EBP. Another barrier is the inability to understand the statistical terms or jargon used in EBP, which creates barriers to understanding and thus makes it hard to implement (Duncombe, 2018). Conversely, the factors that might support the implementation of EBP are the availability of resources and adequate time.
References
Duncombe, D. C. (2018). A multi-institutional study of the perceived barriers and facilitators to implementing evidence-based practice. Journal of Clinical Nursing.
Methods Of Program Evaluation. Evaluation Research Is Offered, by Jennifer Wood
This document discusses different approaches to evaluation research and program evaluation. It provides examples of different types of evaluation research, such as problem analysis, evidence-based policy, and evidence generation. It also discusses publication bias in medical informatics evaluation research and evaluates the training evaluation process for a dinner event. Key aspects of performance evaluations and the challenges associated with the performance evaluation process are outlined as well. Different participant-oriented approaches to evaluation like participatory evaluation, developmental evaluation, and empowerment evaluation are also presented.
Chapter 5 Program Evaluation and Research Techniques, Charlene R. Weir (.docx), by christinemaritza
Chapter 5 Program Evaluation and Research Techniques
Charlene R. Weir
Evaluation of health information technology (health IT) programs and projects can range from simple user satisfaction with a new menu to full-scale analysis of usage, cost, compliance, and patient outcomes, and from observation of usage to data about patients' rates of improvement.
Objectives
At the completion of this chapter the reader will be prepared to:
1. Identify the main components of program evaluation
2. Discuss the differences between formative and summative evaluation
3. Apply the three levels of theory relevant to program evaluation
4. Discriminate program evaluation from program planning and research
5. Synthesize the core components of program evaluation with the unique characteristics of informatics interventions
Key Terms
Evaluation
Formative evaluation
Logic model
Program evaluation
Summative evaluation
Abstract
Evaluation is an essential component in the life cycle of all health IT applications and the key to successful translation of these applications into clinical settings. In planning an evaluation the central questions regarding purpose, scope, and focus of the system must be asked. This chapter focuses on the larger principles of program evaluation with the goal of informing health IT evaluations in clinical settings. The reader is expected to gain sufficient background in health IT evaluation to lead or participate in program evaluation for applications or systems.
Formative evaluation and summative evaluation are discussed. Three levels of theory are presented, including scientific theory, implementation models, and program theory (logic models). Specific scientific theories include social cognitive theories, diffusion of innovation, cognitive engineering theories, and information theory. Four implementation models are reviewed: PRECEDE-PROCEED, PARiHS, RE-AIM, and quality improvement. Program theory models are discussed, with an emphasis on logic models.
A review of methods and tools is presented. Relevant research designs are presented for health IT evaluations, including time series, multiple baseline, and regression discontinuity. Methods of data collection specific to health IT evaluations, including ethnographic observation, interviews, and surveys, are then reviewed.
Introduction
The outcome of evaluation is information that is both useful at the program level and generalizable enough to contribute to the building of science. In the applied sciences, such as informatics, evaluation is critical to the growth of both the specialty and the science. In this chapter program evaluation is defined as the “systematic collection of information about the activities, characteristics, and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming, and/or increase understanding.”1 Health IT interventions are nearly always embedded in ...
Metaevaluation: Evaluation of an Evaluation,
Evaluation System, or Evaluation Device
Michelle Apple
Chattahoochee Technical College
Office of Institutional Effectiveness
The results of a literature search on the topic of metaevaluation are compiled and
summarized within this paper. The paper includes tables and lists of components of a
metaevaluation. The standards (the Program Evaluation Standards, or PES) compiled by the Joint Committee on Standards for Educational Evaluation are included, as well as checklists for performing a metaevaluation as compiled by the employees of the Western Michigan University
Evaluation Center and shared on their center website. Much of the work of Daniel L. Stufflebeam
is included within this paper.
Definition and History
Metaevaluation was first defined by Michael Scriven in his 1969 work, Educational Products Report. He defined metaevaluation as "any evaluation of an evaluation, evaluation system or evaluation device" (Stufflebeam, 2001). That definition is expanded to the following operational definition: "the process of delineating, obtaining, and applying descriptive information and judgmental information about the utility, feasibility, propriety, and accuracy of an evaluation and its systematic nature, competent conduct, integrity/honesty, respectfulness, and social responsibility" (Stufflebeam, 2001).
Stufflebeam states that metaevaluation is needed of all types of evaluations: programs,
projects, products, systems, institutions, theories, models, students, and personnel (Stufflebeam,
2001). Metaevaluation can answer the question of the defensibility of the evaluation process and results, thereby increasing the potential that the data from the evaluation will be utilized by the organization (Cooksy & Caracelli, 2005). Stufflebeam (2001) offers one caution for metaevaluations in other countries or cultures: carefully consider the standards of the metaevaluation tool, as many tools were created in the Western world (the USA and Europe) and may not be appropriate for use in other regions of the world without modification.
Metaevaluation gained momentum with 1998 legislation requiring evidence of the effectiveness of educational programs (Slavin, 2002) and was further bolstered by additional educational efforts under President Clinton and again by President Bush's "No Child Left Behind" initiative (Slavin, 2002).
Evaluation Types and Models
Two broad types of metaevaluation have been identified: proactive (performed before the initial evaluation) and retroactive (judging completed evaluations) (Hanssen, Lawrenz, & Dunet, 2008). Evaluation tool framework components have been grouped into categories by several researchers. The Program Evaluation Standards (PES) identify the categories as utility, feasibility, propriety, and accuracy (Lynch, Greer, Larson, Cummings, Harriett, Dreyfus, & Clay, 2003). Further, evaluation tools themselves have been grouped into categories by researchers such as Linzalone and Schiuma (2015), as in the table below.
Ten Steps of Metaevaluation Process
Ten general steps have been identified by Stufflebeam (2000) as the basis of the metaevaluation process:
1. Determine and arrange interaction with the stakeholders for the metaevaluation.
2. Establish a qualified team to perform the metaevaluation.
3. Define the questions for the metaevaluation.
4. Agree on the standards upon which the evaluation will be judged.
5. Frame the contract for the metaevaluation.
6. Collect and review the existing, pertinent information.
7. Collect needed new information.
8. Analyze and judge the adherence to the evaluation standards.
9. Prepare and submit reports to the stakeholders.
10. Help the stakeholders interpret and apply the findings.
Strongest Evaluation Approaches
Stufflebeam has worked in the field of metaevaluation since its inception. Over time, he has written many papers and completed several studies and literature searches on the topic of metaevaluation. In one paper (Stufflebeam, 2001, Spring), he rated the strongest program evaluation approaches according to their compliance with the Program Evaluation Standards (PES), which had developed into one of the two major standards for metaevaluation. That table appears below.
Purpose of Metaevaluation
Metaevaluations investigate how evaluations are implemented and how they can be improved; how worthwhile, significant, or important they are to stakeholders; and how their costs (including opportunity costs) compare to their benefits (Yarbrough, Shulha, Hopson, & Caruthers, 2011). Metaevaluations can investigate a single evaluation or a single evaluation component. Metaevaluations can address the long-term impact and utility of specific program evaluations or evaluation approaches. However, a metaevaluation can also investigate and compare two or more evaluations or subcomponents; this type of metaevaluation can be utilized as a part of evaluation planning (Yarbrough et al., 2011). Another important aspect of the metaevaluation is to communicate the strengths and limitations of the evaluation to the stakeholders.
Program Evaluation Standards
The Joint Committee on Standards for Educational Evaluation compiled a set of evaluation standards, which are listed below in their most recent revision. As has been stated previously, these standards statements are grouped into categories: utility, feasibility, propriety, accuracy, and accountability. The following was retrieved from www.jcsee.org/program-evaluation-standards-statements, although the website references Yarbrough, Shulha, Hopson, & Caruthers (2011).
Utility Standards
The utility standards are intended to increase the extent to which program stakeholders find evaluation
processes and products valuable in meeting their needs.
U1 Evaluator Credibility: Evaluations should be conducted by qualified people who establish and maintain credibility in the evaluation context.
U2 Attention to Stakeholders: Evaluations should devote attention to the full range of individuals and groups invested in the program and affected by its evaluation.
U3 Negotiated Purposes: Evaluation purposes should be identified and continually negotiated based on the needs of stakeholders.
U4 Explicit Values: Evaluations should clarify and specify the individual and cultural values underpinning purposes, processes, and judgments.
U5 Relevant Information: Evaluation information should serve the identified and emergent needs of stakeholders.
U6 Meaningful Processes and Products: Evaluations should construct activities, descriptions, and judgments in ways that encourage participants to rediscover, reinterpret, or revise their understandings and behaviors.
U7 Timely and Appropriate Communicating and Reporting: Evaluations should attend to the continuing information needs of their multiple audiences.
U8 Concern for Consequences and Influence: Evaluations should promote responsible and adaptive use while guarding against unintended negative consequences and misuse.
Feasibility Standards
The feasibility standards are intended to increase evaluation effectiveness and efficiency.
F1 Project Management: Evaluations should use effective project management strategies.
F2 Practical Procedures: Evaluation procedures should be practical and responsive to the way the program operates.
F3 Contextual Viability: Evaluations should recognize, monitor, and balance the cultural and political interests and needs of individuals and groups.
F4 Resource Use: Evaluations should use resources effectively and efficiently.
Propriety Standards
The propriety standards support what is proper, fair, legal, right and just in evaluations.
P1 Responsive and Inclusive Orientation: Evaluations should be responsive to stakeholders and their communities.
P2 Formal Agreements: Evaluation agreements should be negotiated to make obligations explicit and take into account the needs, expectations, and cultural contexts of clients and other stakeholders.
P3 Human Rights and Respect: Evaluations should be designed and conducted to protect human and legal rights and maintain the dignity of participants and other stakeholders.
P4 Clarity and Fairness: Evaluations should be understandable and fair in addressing stakeholder needs and purposes.
P5 Transparency and Disclosure: Evaluations should provide complete descriptions of findings, limitations, and conclusions to all stakeholders, unless doing so would violate legal and propriety obligations.
P6 Conflicts of Interests: Evaluations should openly and honestly identify and address real or perceived conflicts of interests that may compromise the evaluation.
P7 Fiscal Responsibility: Evaluations should account for all expended resources and comply with sound fiscal procedures and processes.
Accuracy Standards
The accuracy standards are intended to increase the dependability and truthfulness of evaluation
representations, propositions, and findings, especially those that support interpretations and
judgments about quality.
A1 Justified Conclusions and Decisions: Evaluation conclusions and decisions should be explicitly justified in the cultures and contexts where they have consequences.
A2 Valid Information: Evaluation information should serve the intended purposes and support valid interpretations.
A3 Reliable Information: Evaluation procedures should yield sufficiently dependable and consistent information for the intended uses.
A4 Explicit Program and Context Descriptions: Evaluations should document programs and their contexts with appropriate detail and scope for the evaluation purposes.
A5 Information Management: Evaluations should employ systematic information collection, review, verification, and storage methods.
A6 Sound Designs and Analyses: Evaluations should employ technically adequate designs and analyses that are appropriate for the evaluation purposes.
A7 Explicit Evaluation Reasoning: Evaluation reasoning leading from information and analyses to findings, interpretations, conclusions, and judgments should be clearly and completely documented.
A8 Communication and Reporting: Evaluation communications should have adequate scope and guard against misconceptions, biases, distortions, and errors.
Evaluation Accountability Standards
The evaluation accountability standards encourage adequate documentation of evaluations and a
metaevaluative perspective focused on improvement and accountability for evaluation processes and
products.
E1 Evaluation Documentation: Evaluations should fully document their negotiated purposes and implemented designs, procedures, data, and outcomes.
E2 Internal Metaevaluation: Evaluators should use these and other applicable standards to examine the accountability of the evaluation design, procedures employed, information collected, and outcomes.
E3 External Metaevaluation: Program evaluation sponsors, clients, evaluators, and other stakeholders should encourage the conduct of external metaevaluations using these and other applicable standards.
The Western Michigan University Evaluation Center employs Stufflebeam and other metaevaluation experts. The Evaluation Center has compiled numerous documents and checklists for use in evaluation and metaevaluation. Its website contains, in particular, four items concerning metaevaluation that are reproduced in the appendix to this document (Appendix Documents 1 through 4).
References
Cooksy, L. J., & Caracelli, V. J. (2005, March). Quality, context, and use: Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26(1), 31-42. doi:10.1177/1098214004273252
Emery, C. R., Kramer, T. R., & Tian, R. G. (2003). Return to academic standards: A critique of student evaluations of teaching effectiveness. Quality Assurance in Education, 11(2), 37-46. doi:10.1108/09684880310462074
Hanssen, C. E., Lawrenz, F., & Dunet, D. O. (2008, December). Concurrent meta-evaluation: A critique. American Journal of Evaluation, 29(4), 572-582. doi:10.1177/1098214008320462
Linzalone, R., & Schiuma, G. (2015). A review of program and project evaluation models. Measuring Business Excellence, 19(3), 90-99. doi:10.1108/MBE-04-2015-0024
Lynch, D. C., Greer, A. G., Larson, L. C., Cummings, D. M., Harriett, B. S., Dreyfus, K. S., & Clay, M. C. (2003, December). Descriptive metaevaluation: Case study of an interdisciplinary curriculum. Evaluation & the Health Professions, 26(4), 447-461. doi:10.1177/0163278703258099
Slavin, R. E. (2002, October). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21.
Stufflebeam, D. L. (2000, March). The methodology of metaevaluation as reflected in metaevaluations by the Western Michigan University Evaluation Center. Journal of Personnel Evaluation in Education, 14(1), 95-125.
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209.
Stufflebeam, D. L. (2001, Spring). Evaluation models. New Directions for Evaluation, (89).
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Los Angeles, CA: Sage.
Appendix
Appendix Document 1: Guiding Principles Checklist for Evaluating Evaluations based on the
2004 Guiding Principles for Evaluators.
Appendix Document 2: Program Evaluation Models Metaevaluation Checklist (Based on The
Program Evaluation Standards).
Appendix Document 3: Program Evaluations Metaevaluation Checklist (Based on The Program
Evaluation Standards)—Long form.
Appendix Document 4: Program Evaluations Metaevaluation Checklist (Based on The Program
Evaluation Standards)—Short form.
GUIDING PRINCIPLES CHECKLIST
for Evaluating Evaluations
based on the 2004 Guiding Principles for Evaluators
Checklist Developers: Daniel L. Stufflebeam, Leslie Goodyear, Jules Marquart, Elmima Johnson
September 19, 2005
DRAFT
This checklist is designed to help evaluators apply the American Evaluation Association's Guiding Principles for Evaluators in formative and summative metaevaluations. The checklist's contents generally adhere to and in some cases are taken verbatim from the Guiding Principles. Rewriting and/or expansion of the language in the Principles was done to enhance concreteness and systematic application. This checklist is a revision of the previous 2001 version. Any distortions of material drawn from the Principles are unintentional and the checklist developers' responsibility. Otherwise, credit for the content underlying the checklist belongs to the AEA task force members who developed the 1995 version of the Principles and the 2004 revision. This checklist was developed pursuant to a request from the 2005 AEA Ethics Committee. However, the American Evaluation Association was not asked to endorse this checklist, and no claim is made as to the Association's judgment of the checklist. The developers intend that this checklist be used to facilitate use of the Guiding Principles toward the aim of producing sound evaluations.
While the Guiding Principles were developed to guide and assess the services of evaluators, they are also
advocated for use in evaluating individual evaluations. The American Journal of Evaluation provides space for
publishing critiques and comments on previously completed evaluation studies. Contributing “meta-evaluators” are
asked to “ . . . critique the studies, by using wherever possible the existing standards and guidelines published by
AEA (the ‘guiding principles’) and the Joint Committee (the ‘evaluation standards’) . . . .” This checklist is offered
as a tool to assist metaevaluators to apply the Principles to actual studies.
The checklist is not designed as a stand-alone device for reporting metaevaluation findings. Essentially, it
is a format for a metaevaluator’s database. It provides the metaevaluator a means to systematically apply the
Guiding Principles’ concepts in compiling, organizing, analyzing, and formatting findings. It is intended that
metaevaluators use the results from applying the checklist to prepare and deliver user-friendly reports. However,
where appropriate, the completed checklist can be included in the metaevaluation’s technical appendix.
AJE’s advice to ground metaevaluations in both the “guiding principles” and the “evaluation standards”
should be underscored. The Guiding Principles and the Joint Committee (1994) Program Evaluation Standards are
compatible; both have limitations; and both have valuable, unique qualities. Evaluators should employ them as
complementary codes and, as appropriate, use them in concert. The “guiding principles” provide only general,
although vital advice (systematic inquiry, competence, integrity/honesty, respect for people, responsibilities for
general and public welfare) to evaluators for delivering ethical and competent service throughout their careers.
They do not include details for applying the general career-oriented principles to individual studies. While the
Standards focus exclusively on educational evaluations, they present detailed criteria for assessing an evaluation’s
utility, feasibility, propriety, and accuracy. Experience and studies have shown that the Standards can be usefully
adapted for assessing and guiding evaluations outside the education field. Checklists for applying the Standards are
available at <www.wmich.edu/evalctr/checklists>.
Application Suggestions
1. As a preliminary step, characterize the target evaluation. Use Form 1 to describe the following items related to the
target evaluation: client, financial sponsor, cost of the target evaluation, time frame for the target evaluation,
evaluator(s), stakeholder groups, program or other object evaluated, purpose(s) of the target evaluation, key questions,
methodology, reports, apparent strengths, and apparent weaknesses. Using the same form, summarize the target
evaluation in a succinct paragraph.
2. Characterize the metaevaluation. Use Form 2 to describe the following items related to the metaevaluation: title of the
metaevaluation; client; financial sponsor (if different from the client); other audiences; purpose(s); metaevaluation
services/reports to be provided; metaevaluative questions; methods; schedule; metaevaluator(s); their relationship(s),
if any, to the target evaluation; their relationship(s), if any, to the object of the target evaluation; and main contractual
agreements.
3. Collect and study information needed to judge the evaluation. Applicable documents may include contracts, proposals,
interim and final reports, newspaper articles, meeting minutes, correspondence, press releases, file notes, court
affidavits and depositions, publications, etc. Other sources could include notes from telephone interviews and site
visits. One might also provide selected stakeholders a copy of this checklist and invite them to submit information
needed to arrive at the noted judgments. Use Form 3 to make a convenient list of the employed sources.
4a. Work through the checkpoints for each of the 5 principles (Forms 4, 7, 10, 13, and 16). Determine whether each
checkpoint is applicable to the particular evaluation. Write NA on the lines to the left of the nonapplicable
checkpoints. For the remaining checkpoints, place a plus (+), minus (-), or question mark (?) on the lines to the left of
the applicable checkpoints. (A + means an evaluation met the checkpoint’s intent, and a - means the evaluation failed
to meet the checkpoint’s intent.) Base your + and - conclusions on your judgment of whether the evaluation met the
checkpoint’s intent. Place a ? in the box if you have insufficient information to reach a judgment. Place a * in the
indicated space under the column marked minimum requirement for any item you judge to be essential for passing the
particular principle. As feasible, collect additional information needed to reach judgments about the criteria for which
too little information is on hand.
4b. In the spaces provided (on Forms 4, 7, 10, 13, and 16), write the identifying number of each document (including
applicable page numbers) or other information source (including comments as appropriate) you used to help make a
judgment about each checkpoint.
4c. In the spaces provided at the right side of Forms 4, 7, 10, 13, and 16, record noteworthy rationales for your judgments
(of +, -, NA, ?, and *).
5. When feasible (as defined below), rate the target evaluation on each principle by following the instructions for quantitative analysis that appear following each set of checkpoints (Forms 5, 8, 11, 14, and 17). Do this for each principle only if you have been able to assign + or - ratings as follows: 8 of the 9 checkpoints for Principle A, 6 of the 7 checkpoints for Principle B, 12 of the 21 checkpoints for Principle C, 10 of the 16 checkpoints for Principle D, and 10 of the 13 checkpoints for Principle E. A quantitative analysis result would be dubious for any principle or all of them collectively to the extent that many checkpoints have to be marked NA or ?. A quantitative analysis should not be done for any principle for which a checkpoint marked * (minimum requirement) is not met and should not be done overall if any checkpoint marked * is not met. Rate adherence to any principle Poor if an item marked * (minimum requirement) was not met. Consider the cut points given for making judgments of Poor, Marginal, Moderate, Good, and Excellent as general guides. Revise the cut points as you deem appropriate and provide your rationale for the revised cut points. (These decision rules are illustrated in a code sketch following these application suggestions.)
6. Provide an overall narrative assessment of the evaluation’s satisfaction of each principle and overall in the spaces
provided for qualitative analysis following each set of checkpoints (Forms 6, 9, 12, 15, 18, and 21).
7. Assess the target evaluation’s sufficiency of documentation on Form 19. An evaluation should be judged Poor for
any principle or overall if it contains insufficient credible evidence to support its conclusions.
8. If feasible, assign an overall rating of the target evaluation, across all 5 principles, by following the instructions for an
overall quantitative analysis that appear in Form 20. This will be feasible only if you have been able—under the
decision rules in 5 above—to perform quantitative analyses on all 5 principles in Forms 5, 8, 11, 14, and 17 and only
if the evaluation has passed all the minimum requirement (*) items.
9. In the qualitative analysis form (21), briefly present your overall evaluation of the target evaluation. Do this by
thoughtfully considering and synthesizing all of the information and judgments that you recorded in Forms 1 through
20.
10. Caveat: Temper your metaevaluation conclusions according to the sufficiency of evidence pertaining to the
applicable checkpoints. Conclusions concerning any or all of the Guiding Principles should be tentative to the extent
that needed evidence is lacking. However, as a general rule, evaluation comments on the target evaluation’s
sufficiency of documentation are appropriate. In many metaevaluations it will not be feasible to perform sensible
quantitative analyses. The qualitative analyses that are always appropriate should clearly state limitations regarding
such matters as the sufficiency of evidence and the feasibility of rating the evaluation by quantitative means.
11. Decide how to report the information in the completed checklist. Form 22 may be used to present your bottom-line
judgments of the evaluation’s satisfaction of each guiding principle. Sometimes the metaevaluator will employ the
completed checklist only as a working document for preparing a summative evaluation report, such as an article for
AJE. In such cases, the metaevaluator might appropriately retain and not share the completed checklist; he or she
would simply use it to produce other, more user-friendly communications for clients and other stakeholders.
However, sometimes it will be helpful to include the completed checklist in a technical appendix to the
metaevaluation report. Determinations on these matters should be guided by considerations of how best to inform the
audience and secure its interest in and use of the findings and how best to assure the metaevaluation’s validity,
credibility, and accountability.
12. If the metaevaluators decide to share the completed checklist, they may use Form 23 to sign and date the checklist,
thereby attesting to their assessment of the target evaluation.
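For readers who want to automate the bookkeeping above, the following is a minimal sketch, in Python, of the step-5 decision rules for whether a quantitative analysis is feasible for a given principle. The function name, data shapes, and example marks are illustrative assumptions, not part of the checklist itself.

# Minimal sketch (illustrative, not part of the checklist) of the step-5
# decision rules for whether a quantitative analysis is feasible.
# Checkpoint marks are assumed to be "+", "-", "NA", or "?".

# Required number of definite (+ or -) judgments per principle, per step 5.
REQUIRED_DEFINITE = {"A": 8, "B": 6, "C": 12, "D": 10, "E": 10}

def quantitative_analysis_feasible(principle, marks, starred):
    """Return True if step 5 permits a quantitative analysis for `principle`.

    marks: dict mapping checkpoint id (e.g., "A.1") to "+", "-", "NA", or "?".
    starred: set of checkpoint ids marked * (minimum requirements).
    """
    # Any unmet minimum-requirement (*) checkpoint rules out the analysis.
    if any(marks.get(cp) != "+" for cp in starred):
        return False
    # Enough checkpoints must have received a definite + or - judgment.
    definite = sum(1 for mark in marks.values() if mark in ("+", "-"))
    return definite >= REQUIRED_DEFINITE[principle]

# Example: Principle B with six definite judgments and one NA.
marks_b = {"B.1": "+", "B.2": "+", "B.3": "NA", "B.4": "+",
           "B.5": "-", "B.6": "+", "B.7": "+"}
print(quantitative_analysis_feasible("B", marks_b, starred={"B.1"}))  # True

The same marks dictionary can then feed the per-principle rating procedures in Forms 5, 8, 11, 14, and 17.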
Form 1: Characterization of the Target Evaluation
Use this form to record basic information and initial impressions concerning the target evaluation.
Title of the target evaluation:
Client of the target evaluation:
Financial sponsor of the target evaluation (if different from the client):
Estimated Cost of the target evaluation:
Time frame for the target evaluation (e.g., from initial contract date to final report deadline):
Evaluator(s):
Stakeholder groups:
Program or other object of the target evaluation:
Purpose(s) of the target evaluation (e.g., bettering products, personnel, programs, organizations, governments, consumers and
the public interest; contributing to informed decision making and more enlightened change; precipitating needed change;
empowering all stakeholders by collecting data from them and engaging them in the evaluation process; and generating new
insights):
Key evaluative questions:
Main methods:
Key reports to be provided by those in charge of the target evaluation:
Apparent strengths of the target evaluation:
Apparent weaknesses of the target evaluation:
Brief narrative description of the target evaluation:
Form 2: Key Points Regarding the Metaevaluation
Use this form to record basic information about the metaevaluation.
Title of the metaevaluation:
Client of the metaevaluation:
Financial sponsor of the metaevaluation (if different from the client):
Other audiences for the metaevaluation:
Purpose(s) of the metaevaluation (e.g., formative and/or summative):
Key metaevaluation services/reports to be provided:
Key metaevaluative questions:
Main metaevaluation methods:
Schedule and due dates for the metaevaluation:
Title and date of the metaevaluation contract:
Metaevaluator(s):
Name:
Title:
Affiliation:
Relationship(s), if any, of the metaevaluator(s) to the target evaluation:
Relationship(s), if any, of the metaevaluator(s) to the object of the target evaluation:
Form 3: Main Documents and Other Information Sources Referenced in Judging the Target Evaluation
Rarely can a metaevaluator succeed in producing a substantive, defensible evaluation of an evaluation by simply reading the
final evaluation report. In undertaking a metaevaluation it is important to collect a range of relevant documents and, as
feasible, to interview stakeholders. The information used to form metaevaluative judgments is often found in documents
such as contracts, proposals, evaluation instruments, correspondence, interim and final reports, technical appendices,
newspaper articles, meeting minutes, press releases, file notes, publications, etc. Beyond obtaining and
studying such documents, metaevaluators should consider conducting site visits and/or telephone interviews to obtain
information and judgments from the evaluation’s stakeholders. Such stakeholders include the evaluator, client, program
staff, program beneficiaries, and others. It can also be useful to provide selected stakeholders with copies of this checklist
and invite them to provide needed information that is otherwise unavailable. Explicitly listing and systematically
referencing the documents and other sources of information on which a metaevaluation is based is not always necessary,
especially in the case of small scale, formative metaevaluations. However, such documentation can be invaluable when
there is a clear need to convince external audiences that the provided metaevaluation judgments are valid and credible.
Instructions: When documenting the basis for judgments, number each source of information used to judge the target
evaluation from 1 - n. Provide a label for each document or other source of information below to the right of the pertinent
identification number. (When you record your judgments for each checkpoint—on Forms 4, 7, 10, 13, and 16—in the
provided spaces you may record the identification number, relevant page numbers (of referenced documents), and
comments concerning other information sources you used to reach your judgments.)
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
Insert additional pages of references, as needed.
Form 4: Guiding Principle A—SYSTEMATIC INQUIRY
For each checkpoint, assign a NA, +, -, or ?
To meet the requirements for conducting SYSTEMATIC, DATA-BASED INQUIRY, the evaluators did:
For each checkpoint, complete four columns:
JUDGMENT: your judgment of NA, +, -, or ?.
SOURCE: the identification number(s) of information sources (from the list on Form 3) you used to reach your judgment, with pertinent page numbers for the referenced documents and comments, as appropriate, for the other sources.
MINIMUM REQUIREMENT CHECKPOINTS (*): place a * for any checkpoint that you judge to be a minimum requirement, i.e., essential for meeting this principle whether or not all other checkpoints are met.
RATIONALE: noteworthy rationales for your judgments of NA, +, -, or ?.
A.1
Meet the highest technical
standards for quantitative
methods—so as to obtain
and report accurate, credible
information. Key examples
are the APA Standards for
Educational and
Psychological Testing and
the Accuracy section of the
Joint Committee Program
Evaluation Standards.
A.2
Meet the highest technical
standards for qualitative
methods—so as to obtain
and report accurate, credible
information. The Accuracy
section of the Joint
Committee Program
Evaluation Standards is
applicable here.
A.3
Engage the client in exploring
the shortcomings and
strengths of a reasonable
range of potential evaluation
questions
A.4
Engage an appropriate range
of stakeholders in exploring
the shortcomings and
strengths of a reasonable
range of potential evaluation
questions
A.5
Engage the client in considering shortcomings and strengths of the evaluation approaches and methods for answering the agreed-upon evaluation questions
A.6
____Clearly and fully inform the
client and stakeholder about all
aspects of the evaluation, from its
initial conceptualization to the
eventual use of findings.
A.7
Report in sufficient detail the
employed approach(es) and
methods to allow others to
understand, interpret, and
critique the evaluation
process and findings
A.8
In reporting, make clear the
limitations of the evaluation
process and findings
A.9
In communicating evaluation methods and approaches, discuss in a contextually appropriate way those values, assumptions, theories, methods, results, and analyses significantly affecting the interpretation of the evaluative findings
Form 5: Quantitative Analysis for Guiding Principle A–SYSTEMATIC INQUIRY
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because the relative importance of different checkpoints can vary across evaluations, some checkpoints will not be applicable in given evaluations, and the authors of the Guiding Principles provided considerably less detail for some principles than others. There is thus no basis for defining one set of cut scores to divide such criterial concepts as Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide and illustration for exploring the quantitative rating matter. This procedure may be useful in some cases, but not others. Users are advised to apply the procedure with caution and, where it clearly would be misleading, not to apply it at all.
To apply this procedure to quantify the target evaluation's merit in fulfilling Principle A, carry out the following steps and record your answer in the space at the right of each step. (A code sketch of this arithmetic appears after this form.)
Procedure Answer
1. Proceed with this analysis only if all checkpoints for this principle marked * as a minimum requirement have been met (marked +).
2. Determine the number of applicable indicators associated with Principle A by subtracting the number of Principle A indicators marked NA from the total number of Principle A indicators (9).
3. If the number of indicators marked + or - is less than 8, abort the quantitative analysis and proceed to the qualitative analysis.
4. Assess whether the following cut scores are acceptable and defensible for interpreting the value meaning of the score for Principle A. Indicate your decision by placing a checkmark in the appropriate space to the right. Write the rationale for your decision on this matter in the space to the right.
• for all 9 checkpoints: [0-4: Poor, 5: Marginal, 6: Moderate, 7: Good, 8 or 9: Excellent]
• for 8 applicable checkpoints: [0-4: Poor, 5: Marginal, 6: Moderate, 7: Good, 8: Excellent]
Acceptable
Not Acceptable
Rationale:
5. Determine the rating for the evaluation on Principle A by summing the number of checkpoints met and appending a zero (i.e., multiplying by 10). For example, a score of 8 would receive a rating of 80.
6. If you disagree with the cut scores in 4 above, provide the ones you will use here. In either case, record your rating and quality designation (poor, marginal, moderate, good, or excellent) of the evaluation in the space at the right. Also, provide your rationale for the new cut scores below:
Rating: ____
Quality Designation:
Rationale:
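The rating arithmetic in steps 4 and 5 can be expressed compactly. Below is a minimal sketch assuming the default cut scores above; the function and variable names are illustrative, not part of the checklist.

# Minimal sketch of the Form 5 rating arithmetic for Principle A,
# assuming the default cut scores in step 4. Names are illustrative.

CUT_SCORES_A = [(0, 4, "Poor"), (5, 5, "Marginal"), (6, 6, "Moderate"),
                (7, 7, "Good"), (8, 9, "Excellent")]

def rate_principle_a(marks):
    """marks: dict mapping "A.1"-"A.9" to "+", "-", "NA", or "?"."""
    score = sum(1 for mark in marks.values() if mark == "+")  # checkpoints met
    rating = score * 10  # step 5: append a zero to the score
    label = next(name for low, high, name in CUT_SCORES_A if low <= score <= high)
    return rating, label

# Example: eight of nine checkpoints met.
marks_a = {f"A.{i}": "+" for i in range(1, 10)}
marks_a["A.8"] = "-"
print(rate_principle_a(marks_a))  # (80, 'Excellent')

Forms 8, 14, and 17 follow the same pattern with their own checkpoint counts and cut scores.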
Form 6: Qualitative Summary for Guiding Principle A—SYSTEMATIC INQUIRY
Write your overall assessment of the evaluation’s compliance with the SYSTEMATIC INQUIRY Principle below.
Form 7: Guiding Principle B—COMPETENCE
For each checkpoint, assign a NA, +, -, or ?
To meet the requirements to provide COMPETENT PERFORMANCE to stakeholders, the evaluator or evaluation team did:
For each checkpoint, complete four columns:
JUDGMENT: your judgment of NA, +, -, or ?.
SOURCE: the identification number(s) and page numbers of any documents (from the list on Form 3) you used to reach your judgment.
MINIMUM REQUIREMENT CHECKPOINTS (*): place a * for any checkpoint that you judge to be a minimum requirement, i.e., essential for meeting this principle.
RATIONALE: noteworthy rationales for your judgments of NA, +, -, or ?.
B.1
Possess the education,
abilities, skills, and
experience appropriate to
competently and successfully
carry out the proposed
evaluation tasks.
B.2
Demonstrate a sufficient level
of cultural competence to
ensure recognition, accurate
interpretation, and respect for
diversity.
B.3
___As appropriate, demonstrate cultural competence by seeking awareness of their own culturally based assumptions, their understanding of the worldviews of culturally different participants and stakeholders in the evaluation, and the use of appropriate evaluation strategies and skills in working with culturally different groups. (Diversity may be in terms of race, ethnicity, gender, religion, socio-economics, or other factors pertinent to the evaluation context.)
B.4
Practice within the limits of
their professional training
and competence and, as
feasible, would have
declined an evaluation
assignment outside those
limits
B.5
___When unable to decline an evaluation assignment transcending one's capabilities, make clear to appropriate parties any potential significant limitations of the evaluation
B.6
___Make every effort to gain the
competence required to
successfully conduct the
evaluation directly or through
the assistance of other
appropriately qualified
evaluators
B.7
Evidence a history of
continually seeking to
maintain and improve
evaluation competence—
through such means as
formal coursework and
workshops, self-study,
evaluations of one’s own
practice, and learning from
the practices of other
evaluators--in order to
deliver the highest level of
evaluation service.
Form 8: Quantitative Analysis for Guiding Principle B–COMPETENCE
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because the relative importance of different checkpoints can vary across evaluations, some checkpoints will not be applicable in given evaluations, and the authors of the Guiding Principles provided considerably less detail for some principles than others. There is thus no clear basis for defining one set of cut scores to divide such criterial concepts as Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide for exploring the quantitative rating matter. This procedure may be quite useful in some cases, but not in others. Users are advised to use the procedure with caution and, where it clearly would be misleading, not to use it at all.
To employ this procedure to quantify the target evaluation’s merit in fulfilling Principle B, carry out the following steps and record your
answer in the space at the right of each step.
1. Proceed with this analysis only if all checkpoints for this principle marked * as a minimum requirement have been met (marked +).
2. Determine the number of applicable indicators associated with Principle B by subtracting the number of Principle B indicators marked NA from the total number of Principle B indicators (7).
3. If the number of indicators marked + or - is less than 6, abort the quantitative analysis and proceed to the qualitative analysis.
4. Assess whether the following cut scores are acceptable and defensible for interpreting the value meaning of the score for Principle B. Indicate your decision by placing a checkmark in the appropriate space to the right. Write the rationale for your decision on this matter in the space to the right.
• for all 7 checkpoints: [0-2: Poor, 3: Marginal, 4: Moderate, 5: Good, 6 or 7: Excellent]
• for 6 applicable checkpoints: [0-3: Poor, 4: Marginal to Moderate, 5: Good, 6: Excellent]
Acceptable
Not Acceptable
Rationale:
5. Determine the rating for the evaluation on Principle B by summing the number of checkpoints met and appending a zero (i.e., multiplying by 10). For example, a score of 7 would receive a rating of 70.
6. If you disagree with the cut scores in 4 above, provide the ones you will use here. In either case, record your rating and quality designation (poor, marginal, moderate, good, or excellent) of the evaluation in the space at the right. Also, provide your rationale for the new cut scores below:
Rating: _
Quality Designation:
Rationale:
Form 9: Qualitative Summary for Guiding Principle B—COMPETENCE
Write your overall assessment of the evaluation’s compliance with the COMPETENCE Principle below.
Form 10: Guiding Principle C—INTEGRITY/HONESTY
For each checkpoint, assign a NA, +, -, or ?
To ensure the INTEGRITY and HONESTY of their own behavior and the entire evaluation process, the evaluator or evaluators did:
For each checkpoint, complete four columns:
JUDGMENT: your judgment of NA, +, -, or ?.
SOURCE: the identification number(s) and page numbers of any documents (from the list on Form 3) you used to reach your judgment.
MINIMUM REQUIREMENT CHECKPOINTS (*): place a * for any checkpoint that you judge to be a minimum requirement, i.e., essential for meeting this principle.
RATIONALE: noteworthy rationales for your judgments of NA, +, -, or ?.
C.1
___Take the initiative in
negotiating all
aspects of the
evaluation with clients
and representative
stakeholders
C.2
___Negotiate honestly
with clients and
relevant stakeholders
concerning the
evaluation tasks
C.3
___Negotiate honestly
with clients and
representative
stakeholders
concerning the
limitations of the
selected methods
C.4
___Negotiate honestly
with clients and
representative
stakeholders
concerning the scope
of results likely to be
obtained
C.5
___Negotiate honestly
with clients and
representative
stakeholders
concerning
appropriate uses of
the evaluation’s data
C.6
___Negotiate honestly
with clients and
relevant stakeholders
concerning the costs
of the evaluation
C.7
___As appropriate,
forewarn the client
and relevant
stakeholders of any
contemplated
procedures or
activities that likely
would produce
misleading
evaluative
information or
conclusions
C.8
___Make appropriate
efforts to resolve
concerns about any
procedures or
activities that likely
would produce
misleading evaluative
information or
conclusions
C.9
___As feasible and
appropriate decline
to conduct the
evaluation if
important concerns
cannot be resolved
C.10
___If declining a
problematic
assignment was not
feasible, consult
colleagues or
representative
stakeholders about
other proper ways to
proceed, such as
discussions at a higher
level, a dissenting
statement, or refusal
to sign the final
document
C.11
___Prior to accepting the
evaluation
assignment, disclose
any real or apparent
conflicts of interest
with their role as
evaluator
C.12
___In proceeding with the
evaluation, report
clearly any
conflicts of interest
they had and how
these were addressed
C.13
___Explicate their own,
their clients’, and
other stakeholders’
interests and values
concerning the
conduct and
outcomes of the
evaluation
C.14
___Before changing the
negotiated
evaluation plans in
ways to significantly
affect the scope and
likely results of the
evaluation, inform,
as appropriate, the
client and other
important
stakeholders in a
timely fashion of
the changes and
their likely impact
C.15
___Record, explain, and
report all changes
made in the
originally negotiated
plans
C.16
___Provide the clients and
stakeholders with
valid
representations of
the evaluation
procedures
C.17
___Provide the clients and
stakeholders with
valid
representations of
the evaluation data
and findings
C.19
___Within reasonable
limits, take steps to
prevent or correct
misuse of their
work by others
C.20
___Disclose all sources of
financial support
for the evaluation
Form 11: Quantitative Analysis for Guiding Principle C–INTEGRITY AND HONESTY
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because the importance and applicability of checkpoints varies for different evaluations, some checkpoints will not be applicable in given evaluations, and there is no one consistent basis for setting cut scores to divide the criterial concepts of Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide for exploring the quantitative rating matter. This procedure may be useful in some cases, but not in others. Users are advised to apply the procedure with caution and, where it clearly would be misleading, not to apply it at all.
To employ this procedure to quantify the target evaluation's merit in fulfilling Principle C, carry out the following steps and record your answer in the space at the right of each step. (A code sketch of this percentage-based scoring appears after this form.)
1. Proceed with this analysis only if all checkpoints for this principle marked * as a minimum requirement have been met (marked +).
2. Determine the number of applicable indicators associated with Principle C by subtracting the
number of Principle C indicators marked NA from the total number of Principle C indicators (16).
3. If the number of indicators marked + or - is less than 12, abort the quantitative analysis and
proceed to the qualitative analysis.
4. Determine the proportion of applicable Principle C indicators that the target evaluation passed by dividing the number of indicators marked with a plus (+) by the number of indicators not marked NA.
5. Determine the score for Principle C by multiplying that proportion by 100.
6. Assess whether the following cut scores [0-39: Poor, 40-59: Marginal, 60-79: Moderate, 80-92:
Good, 93-100: Excellent] are acceptable and defensible for interpreting the value meaning of the
score for Principle C. Indicate your decision by placing a checkmark in the appropriate space to
the right. Write your rationale for your decision on this matter below.
___Acceptable
___Not Acceptable
7. If you disagree with the cut scores in 6 above, provide the ones you will use here. In either case
record the rating and quality designation (poor, marginal, moderate, good, or excellent) of the
evaluation in the space at the right. Also, provide your rationale for the new cut scores below.
Rating:
Quality Designation:
Rationale:
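Because steps 1 through 6 above are purely mechanical once the checkpoint ratings are in hand, the procedure can be expressed compactly in code. The following Python sketch is illustrative only and not part of the checklist: it assumes the ratings arrive as a mapping from checkpoint IDs (e.g., "C.1") to "+", "-", "?", or "NA", together with the set of checkpoints starred as minimum requirements and the abort threshold from step 3 (12 for Principle C; the parallel forms for Principles D and E use 10). All names are hypothetical.

# Illustrative sketch of the per-principle quantitative procedure
# (hypothetical names; not part of the checklist itself).
DEFAULT_CUTS = [(93, "Excellent"), (80, "Good"), (60, "Moderate"),
                (40, "Marginal"), (0, "Poor")]

def principle_score(ratings, minimum_reqs, abort_below, cuts=DEFAULT_CUTS):
    """ratings: checkpoint ID -> '+', '-', '?', or 'NA'."""
    # Step 1: proceed only if every starred minimum requirement was met (+).
    if any(ratings.get(cp) != "+" for cp in minimum_reqs):
        return None  # fails a minimum requirement; no quantitative score
    # Step 2: applicable indicators are those not marked NA.
    applicable = [r for r in ratings.values() if r != "NA"]
    # Step 3: abort if too few indicators received a definite + or - judgment.
    if sum(1 for r in applicable if r in ("+", "-")) < abort_below:
        return None  # fall back to the qualitative analysis instead
    # Steps 4-5: the score is the share of applicable indicators passed, x 100.
    score = sum(1 for r in applicable if r == "+") / len(applicable) * 100
    # Step 6 (or step 7 with user-supplied cut scores): interpret the score.
    label = next(name for cut, name in cuts if score >= cut)
    return score, label

The same sketch serves Forms 14 and 17 by changing the indicator total and the abort threshold.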
Form 12: Qualitative Summary for Guiding Principle C–INTEGRITY/HONESTY
Write your overall assessment of the evaluation’s compliance with the INTEGRITY/HONESTY Principle below.
Form 13: Guiding Principle D—RESPECT FOR PEOPLE
For each checkpoint, assign a NA, +, -, or ?
To meet the requirements for RESPECTING THE SECURITY, DIGNITY, AND SELF-WORTH OF THE EVALUATION’S RESPONDENTS, CLIENTS, AND OTHER EVALUATION STAKEHOLDERS, the evaluator or evaluators did:
SOURCE: Provide the identification number(s) and page numbers of any documents (from the list on Form 3) you used to reach your judgment.
MINIMUM REQUIREMENT (*): Place a * below for any checkpoint that you judge to be a minimum requirement, i.e., essential for meeting this principle.
RATIONALE: Provide noteworthy rationales for your judgments of NA, +, -, or ?.
Column headings: JUDGMENT CHECKPOINTS | SOURCE | MINIMUM REQUIREMENT (*) | RATIONALE
D.1
___Develop a comprehensive understanding of the evaluation’s contextual elements, including, as appropriate, geographic location, timing, political and social climate, economic conditions, and other relevant dynamics in progress at the same time
D.2
___Abide by current professional ethics, standards, and regulations regarding risks, harms, and burdens that might befall participants in the evaluation
D.3
___Abide by current professional ethics, standards, and regulations regarding informed consent of evaluation participants
D.4
___Abide by current professional ethics, standards, and regulations regarding informing participants and clients about the scope and limits of confidentiality
D.5
___In the event of having to state negative or critical conclusions that could harm client or stakeholder interests, seek in appropriate ways to maximize the benefits and reduce any unnecessary harms that might occur
D.6
___In seeking to maximize the benefits and reduce any unnecessary harms, guard against compromising the evaluation’s integrity
D.7
___As appropriate, act in accordance with justified conclusions that the benefits from doing the evaluation or from performing certain evaluation procedures should be foregone because of the risks and harms
D.8
___To the extent possible, seriously consider during the negotiation the evaluation’s possible risks and harms and what to do about them
D.9
___Conduct the evaluation and communicate the results in a way that clearly respects each stakeholder’s dignity and self-worth
D.10
___As feasible, act to assure that evaluation participants benefit in return
D.11
___Assure that evaluation participants had full knowledge of and opportunity to obtain any benefits of the evaluation
D.12
___Assure that program participants were informed that their eligibility to receive services did not hinge on their participation in the evaluation
D.13
___Seek to ensure that those asked to contribute data and/or incur risks do so willingly
D.14
___Become acquainted with and respect differences among participants, including their culture, religion, gender, disability, age, sexual orientation, and ethnicity
D.15
___Take into account participants’ differences when planning, conducting, analyzing, and reporting the evaluation
D.16
___Foster the evaluation’s social equity by providing, as appropriate, feedback and other pertinent benefits to the evaluation’s contributors
Form 14: Quantitative Analysis for Guiding Principle D–RESPECT FOR PEOPLE
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because some criteria are more important than others, many criteria will not be applicable in given evaluations, the importance and applicability of checkpoints vary for different evaluations, and there is no clear basis for setting cut scores to divide the criterial concepts of Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide for exploring the quantitative rating matter. This procedure may be useful in some cases, but not in others. Users are advised to apply the procedure with caution and, where it clearly would be misleading, not to apply it at all.
To employ this procedure to quantify the target evaluation’s merit in fulfilling Principle D, carry out the following steps and record your answer in the space at the right of each step.
1. Proceed with this analysis only if all checkpoints for this principle marked * as a minimum requirement have been met (marked +).
2. Determine the number of applicable indicators associated with Principle D by subtracting the number of Principle D indicators marked NA from the total number of Principle D indicators (16).
3. If the number of indicators marked + or - is less than 10, abort the quantitative analysis and proceed to the qualitative analysis.
4. Determine the percent of Principle D applicable indicators that the target evaluation passed by dividing the number of indicators marked with a plus (+) by the number of indicators not marked NA.
5. Determine the score for Principle D by multiplying the percent of Principle D applicable indicators marked with a + by 100.
6. Assess whether the following cut scores [0-39: Poor, 40-59: Marginal, 60-79: Moderate, 80-92: Good, 93-100: Excellent] are acceptable and defensible for interpreting the value meaning of the score for Principle D. Indicate your decision by placing a checkmark in the appropriate space to the right. Write your rationale for your decision on this matter here:
___Acceptable
___Not Acceptable
7. If you disagree with the cut scores in 6 above, provide the ones you will use here. In either case, record your rating and quality designation (poor, marginal, moderate, good, or excellent) of the evaluation in the space at the right. Also, provide your rationale for the new cut scores below:
Rating:
Quality Designation:
Rationale:
Form 15: Qualitative Summary for Guiding Principle D—RESPECT FOR PEOPLE
Write your overall assessment of the evaluation’s compliance with the RESPECT FOR PEOPLE Principle below.
Form 16: Guiding Principle E—RESPONSIBILITIES FOR GENERAL AND PUBLIC WELFARE
For each checkpoint, assign a NA, +, -, or ?
To articulate and take into account the diversity of general and public interests and values that may be related to the evaluation, the evaluator or evaluators did:
SOURCE: Provide the identification number(s) and page numbers of any documents (from the list on Form 3) you used to reach your judgment.
MINIMUM REQUIREMENT (*): Place a * below for any checkpoint that you judge to be a minimum requirement, i.e., essential for meeting this principle.
RATIONALE: Provide noteworthy rationales for your judgments of NA, +, -, or ?.
Column headings: JUDGMENT CHECKPOINTS | SOURCE | MINIMUM REQUIREMENT (*) | RATIONALE
E.1
___Present evaluation plans and reports that include, as appropriate, relevant perspectives and interests of the full range of stakeholders
E.2
___Consider not only the immediate operations and outcomes of the evaluand, but also its broad assumptions, implications, and potential side effects
E.3
___Follow the precepts of freedom of information by allowing all relevant stakeholders access to evaluative information in forms that respect people and honor promises of confidentiality
E.4
___As resources allow, actively disseminate findings to stakeholders
E.5
___As appropriate, tailor different reports to the needs and interests of different right-to-know audiences
E.6
___In tailoring reports for specific audiences, include all results that may bear on their interests and refer to any other tailored communications to other stakeholders
E.7
___Report clearly and simply so that clients and other stakeholders could easily understand the evaluation process and results
E.7
___Maintain an appropriate balance between meeting client needs and other needs
E.8
___Effectively address legitimate client needs without compromising ethical and methodological principles
E.9
___As needed, effectively and ethically address any threats to the evaluation’s integrity, e.g., ones associated with inappropriate client requests or political conflicts
E.10
___As needed, forthrightly identify and discuss conflicts with the client and stakeholders, resolve the conflicts if possible, or, as feasible, abort the evaluation if a serious conflict cannot be resolved
E.11
___If a serious conflict could not be resolved and the evaluation could not be terminated, make clear the negative consequences for the evaluation
E.12
___In conducting the evaluation, take all appropriate steps to counter any clear threats, associated with the evaluation, to the public interest and good
E.13
___Analyze and convey findings in terms of the welfare of society as a whole as well as the interests of the client and other relevant stakeholder groups
Form 17: Quantitative Analysis for Guiding Principle E–GENERAL AND PUBLIC WELFARE
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because some checkpoints are more important than others, many checkpoints will not be applicable in given evaluations, the checkpoints vary in importance and applicability across evaluations, and there is no one basis for setting cut scores to divide the criterial concepts of Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide for exploring the quantitative rating matter. This procedure may be useful in some cases, but not in others. Users are advised to apply the procedure with caution and, where it clearly would be misleading, not to apply it at all.
To employ this procedure to quantify the target evaluation’s merit in fulfilling Principle E, carry out the following steps and record your answer in the space at the right of each step.
1. Proceed with this analysis only if all checkpoints for this principle marked * as a minimum requirement have been met (marked +).
2. Determine the number of applicable indicators associated with Principle E by subtracting the number of Principle E indicators marked NA from the total number of Principle E indicators (13).
3. If the number of indicators marked + or - is less than 10, abort the quantitative analysis and proceed to the qualitative analysis.
4. Determine the percent of Principle E applicable indicators that the target evaluation passed by dividing the number of indicators marked with a plus (+) by the number of indicators not marked NA.
5. Determine the score for Principle E by multiplying the percent of Principle E applicable indicators marked with a + by 100.
6. Assess whether the following cut scores [0-39: Poor, 40-59: Marginal, 60-79: Moderate, 80-92: Good, 93-100: Excellent] are acceptable and defensible for interpreting the value meaning of the score for Principle E. Indicate your decision by placing a checkmark in the appropriate space to the right. Write your rationale for your decision on this matter here:
___Acceptable
___Not Acceptable
7. If you disagree with the cut scores in 6 above, provide the ones you will use here. In either case, record the rating of the evaluation and quality designation (poor, marginal, moderate, good, or excellent) in the space at the right. Also, provide your rationale for the new cut scores below:
Rating:
Quality Designation:
Rationale:
Form 18: Qualitative Summary for Guiding Principle E—RESPONSIBILITIES FOR
GENERAL AND PUBLIC WELFARE
Write your overall assessment of the evaluation’s compliance with the RESPONSIBILITIES FOR GENERAL AND
PUBLIC WELFARE Principle below.
Form 20: Summary Quantitative Evaluation of the Target Evaluation
Caveat: It is problematic to do any kind of precise quantitative analysis of ratings drawn from the Guiding Principles and this checklist, because the relative importance of different checkpoints varies across different evaluations, many checkpoints will not be applicable in given evaluations, and there is no single basis for setting cut scores that divide the criterial concepts of Poor, Marginal, Moderate, Good, and Excellent. The following quantitative analysis procedure is provided only as a rough guide for exploring the quantitative rating matter. This procedure may be useful in some cases, but not in others. Users are advised to apply the procedure with caution and, where it clearly would be misleading, not to apply it at all.
To apply this procedure to quantify the target evaluation’s merit in fulfilling all 5 AEA Guiding Principles, carry out the following steps and, as appropriate, record your answers in the space at the right of each step. (A computational sketch of the procedure follows this form.)
1.___Proceed with this analysis only if you obtained ratings (in Forms 5, 8, 11, 14, and 17) for all 5 principles in accordance with the instructions given for those forms.
2.___If the target evaluation rated Poor on any principle, judge the evaluation a failure regardless of its ratings on the other principles.
3.___If the target evaluation rated Marginal or higher on all 5 principles, determine the evaluation’s overall score by summing the 5 scores and dividing by 5.
4.___Assess whether the following cut scores [0-39: Poor, 40-59: Marginal, 60-79: Moderate, 80-92: Good, 93-100: Excellent][8] are acceptable and defensible for interpreting the value meaning of the score for the overall evaluation. Indicate your decision by placing a checkmark in the appropriate space to the right. Write your rationale for your decision on this matter here:
___Acceptable
___Not Acceptable
[8] The rationale for this set of cut scores is focused mainly on the top and bottom judgment categories. Any evaluation that met less than 25 percent of checkpoints overall would provide a poor basis for decision making. An evaluation that met 93 percent or more of the checkpoints would be excellent, so long as no checkpoint judged to be a minimum requirement was failed. Meeting 80-92 percent of the checkpoints would seem to provide a probably good basis for decision making, again assuming that no minimum requirement checkpoint was missed. An evaluation that scored in the moderate range (meeting 60-79 percent of the checkpoints) would not be considered good, but also not disastrous if no minimum requirement checkpoints were missed. Summative metaevaluations should seek to credit evaluations that fall in the excellent range and discourage use of those that fall in the poor and marginal ranges (0-59 percent of the checkpoints met). Formative metaevaluations should seek to help strengthen especially those evaluations that fall in the moderate and good ranges.
5.___If you disagree with the cut scores in 4 above, provide the ones you will use here. In either case, record the rating and quality designation (poor, marginal, moderate, good, or excellent) of the evaluation in the space at the right. Also, provide your rationale for the new cut scores below:
Overall Rating:
Overall Quality Designation:
Rationale:
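Steps 1 through 5 above reduce to a simple rule: fail the evaluation outright on any Poor principle rating; otherwise average the five principle scores and read the mean off the cut-score table. The Python sketch below is illustrative only, assuming the five (score, label) pairs produced by Forms 5, 8, 11, 14, and 17; the names are hypothetical.

# Illustrative sketch of the Form 20 synthesis (hypothetical names).
CUTS = [(93, "Excellent"), (80, "Good"), (60, "Moderate"),
        (40, "Marginal"), (0, "Poor")]

def overall_rating(principle_results, cuts=CUTS):
    """principle_results: five (score, label) pairs from Forms 5, 8, 11, 14, 17."""
    # Step 1: every principle must have a quantitative rating.
    if len(principle_results) != 5 or any(r is None for r in principle_results):
        return None
    # Step 2: Poor on any single principle fails the evaluation outright.
    if any(label == "Poor" for _, label in principle_results):
        return "Failure"
    # Step 3: otherwise the overall score is the mean of the five scores.
    mean = sum(score for score, _ in principle_results) / 5
    # Step 4 (or step 5 with user-revised cut scores): interpret the mean.
    return mean, next(name for cut, name in cuts if mean >= cut)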
Form 21: Summary Qualitative Evaluation of the Target Evaluation
Assess the target evaluation’s overall merit, taking into account pertinent caveats.
Form 22: Overall Summary Evaluation of the Target Evaluation
Taking account of all the preceding analyses, provide your overall summary judgment of the target evaluation by
placing checkmarks in the appropriate cells below.
PRINCIPLE | Poor | Marginal | Moderate | Good | Excellent
A. Systematic Inquiry
B. Competence
C. Integrity/Honesty
D. Respect for People
E. Responsibilities for General and Public Welfare
Overall
SUPPLEMENTARY COMMENTS: In the space below state any general points, caveats, etc. that readers should
keep in mind as they consider the preceding bottom-line judgments.
Form 23: Attestation
To the best of my/our ability, the above analysis, judgments, syntheses, and overall assessment provide a sound
evaluation of the target evaluation based on the American Evaluation Association’s 2004 Guiding Principles for
Evaluators.
Name(s) (print):
(sign): Date:
(print):
(sign): Date:
(print):
(sign): Date:
(print):
(sign): Date:
(print):
(sign): Date:
Reference
Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards. Thousand
Oaks, CA: Sage.
Endnotes
1. American Evaluation Association. (2004). Guiding principles for evaluators. www.eval.org/Guiding%20Principles
2. Stufflebeam, D. L. (September, 2001). Guiding Principles Checklist. Kalamazoo, MI: The Evaluation Center, www.wmich.edu/evalctr/checklists
3. The AEA Task Force that developed the original 1995 Guiding Principles included William Shadish, Dianna Newman, Mary Ann Scheirer, and Christopher Wye. The 2004 revision of the Principles was prepared by the 2002 and 2003 AEA Ethics Committees, whose collective membership included Gail Barrington, Deborah Bonnet, Elmima Johnson, Anna Madison, Doris Redfield, and Katherine Ryan.
4. For each principle and all five combined, any evaluation that scored less than 40 would be considered a poor basis for conclusions and decisions. An evaluation that scored 93 to 100 would be judged excellent, so long as it passed all the minimum requirement standards. A score in the 80 to 92 range would seem to provide a quite good basis for conclusions and decisions, again assuming that no minimum standard was missed. An evaluation that scored in the marginal (40 to 59) and moderate (60 to 79) ranges would not be considered good, but also not disastrous if no minimum requirement checkpoints were missed. Summative metaevaluations should seek to credit evaluations that fall in the excellent range and discourage use of those that fall in the poor and marginal ranges. Formative metaevaluations should seek to help strengthen evaluations, especially those that fall in the marginal and moderate ranges.
Evaluation Checklists Project
www.wmich.edu/evalctr/checklists
PROGRAM EVALUATION MODELS METAEVALUATION CHECKLIST
(Based on The Program Evaluation Standards)
Daniel L. Stufflebeam
1999
This checklist is for performing metaevaluations of program evaluation models. It is organized according to the
Joint Committee Program Evaluation Standards. For each of the 30 standards the checklist includes 10
checkpoints drawn from the substance of the standard. It is suggested that each standard be scored on each
checkpoint. Then judgments about the adequacy of the subject evaluation model in meeting the standard can
be made as follows: 0-2 Poor, 3-4 Fair, 5-6 Good, 7-8 Very Good, 9-10 Excellent. It is recommended that an
evaluation model be failed if it scores Poor on standards P1 Service Orientation, A5 Valid Information, A10
Justified Conclusions, or A11 Impartial Reporting. Users of this checklist are advised to consult the full text of
The Joint Committee (1994) Program Evaluation Standards, Thousand Oaks, CA: Sage Publications.
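Because each of the 30 standards is scored by counting its met checkpoints (0 to 10) and the four named standards act as knockout criteria, the rating rule can be stated compactly in code. The Python sketch below is illustrative only, with hypothetical names; it simply restates the scoring rule from the paragraph above.

# Illustrative restatement of the scoring rule (hypothetical names).
BANDS = [(9, "Excellent"), (7, "Very Good"), (5, "Good"), (3, "Fair"), (0, "Poor")]
KNOCKOUTS = {"P1", "A5", "A10", "A11"}  # fail the model if any of these rates Poor

def standard_rating(checkpoints_met):
    """Map the number of met checkpoints (0-10) to a rating band."""
    return next(name for floor, name in BANDS if checkpoints_met >= floor)

def model_fails(scores):
    """scores: standard ID (e.g., 'U1', 'P1') -> checkpoints met (0-10)."""
    return any(standard_rating(scores[s]) == "Poor" for s in KNOCKOUTS if s in scores)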
To meet the requirements for Utility, evaluations using the evaluation model should:
U1 Stakeholder Identification
! Clearly identify the evaluation client
! Engage leadership figures to identify other stakeholders
! Consult potential stakeholders to identify their information needs
! Use stakeholders to identify other stakeholders
! With the client, rank stakeholders for relative importance
! Arrange to involve stakeholders throughout the evaluation
! Keep the evaluation open to serve newly identified stakeholders
! Address stakeholders' evaluation needs
! Serve an appropriate range of individual stakeholders
! Serve an appropriate range of stakeholder organizations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U2 Evaluator Credibility
! Engage competent evaluators
! Engage evaluators whom the stakeholders trust
! Engage evaluators who can address stakeholders’ concerns
! Engage evaluators who are appropriately responsive to issues of gender, socioeconomic status, race, and language and cultural differences
! Assure that the evaluation plan responds to key stakeholders’ concerns
! Help stakeholders understand the evaluation plan
! Give stakeholders information on the evaluation plan’s technical quality and practicality
! Attend appropriately to stakeholders’ criticisms and suggestions
! Stay abreast of social and political forces
! Keep interested parties informed about the evaluation’s progress
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U3 Information Scope and Selection
! Understand the client’s most important evaluation requirements
! Interview stakeholders to determine their different perspectives
! Assure that evaluator and client negotiate pertinent audiences, questions, and required information
! Assign priority to the most important stakeholders
! Assign priority to the most important questions
! Allow flexibility for adding questions during the evaluation
! Obtain sufficient information to address the stakeholders’ most important evaluation questions
! Obtain sufficient information to assess the program’s merit
! Obtain sufficient information to assess the program’s worth
! Allocate the evaluation effort in accordance with the priorities assigned to the needed information
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U4 Values Identification
! Consider alternative sources of values for interpreting evaluation findings
! Provide a clear, defensible basis for value judgments
! Determine the appropriate party(s) to make the valuational interpretations
! Identify pertinent societal needs
! Identify pertinent customer needs
! Reference pertinent laws
! Reference, as appropriate, the relevant institutional mission
! Reference the program’s goals
! Take into account the stakeholders’ values
! As appropriate, present alternative interpretations based on conflicting but credible value bases
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U5 Report Clarity
! Clearly report the essential information
! Issue brief, simple, and direct reports
! Focus reports on contracted questions
! Describe the program and its context
! Describe the evaluation’s purposes, procedures, and findings
! Support conclusions and recommendations
! Avoid reporting technical jargon
! Report in the language(s) of stakeholders
! Provide an executive summary
! Provide a technical report
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U6 Report Timeliness and Dissemination
! Make timely interim reports to intended users
! Deliver the final report when it is needed
! Have timely exchanges with the program’s policy board
! Have timely exchanges with the program’s staff
! Have timely exchanges with the program’s customers
! Have timely exchanges with the public media
! Have timely exchanges with the full range of right-to-know audiences
! Employ effective media for reaching and informing the different audiences
! Keep the presentations appropriately brief
! Use examples to help audiences relate the findings to practical situations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U7 Evaluation Impact
! Maintain contact with audience
! Involve stakeholders throughout the evaluation
! Encourage and support stakeholders’ use of the findings
! Show stakeholders how they might use the findings in their work
! Forecast and address potential uses of findings
! Provide interim reports
! Make sure that reports are open, frank, and concrete
! Supplement written reports with ongoing oral communication
! Conduct feedback workshops to go over and apply findings
! Make arrangements to provide follow-up assistance in interpreting and applying the findings
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for UTILITY
Add the following:
Number of Excellent ratings (0-7) x 4 = _____
Number of Very Good ratings (0-7) x 3 = _____
Number of Good ratings (0-7) x 2 = _____
Number of Fair ratings (0-7) x 1 = _____
Total score: _____
_____ (Total score) ÷ 28 = _____ x 100 = _____
Strength of the model’s provisions for UTILITY:
! 26 (93%) to 28: Excellent
! 19 (68%) to 25: Very Good
! 14 (50%) to 18: Good
! 7 (25%) to 13: Fair
! 0 (0%) to 5: Poor
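The tally above (and the parallel tallies below for Feasibility, Propriety, and Accuracy) follows a single formula: weight the standard ratings 4/3/2/1/0, sum them, and compare the total against bands set at roughly 93, 68, 50, 25, and 0 percent of the maximum. As a rough aid only, here is an illustrative Python sketch with hypothetical names; the printed bands (e.g., 26 to 28 out of 28 for Excellent) are these fractional thresholds rounded to whole scores.

# Illustrative sketch of the category tally (hypothetical names).
WEIGHTS = {"Excellent": 4, "Very Good": 3, "Good": 2, "Fair": 1, "Poor": 0}
BANDS = [(0.93, "Excellent"), (0.68, "Very Good"), (0.50, "Good"), (0.25, "Fair")]

def category_score(counts, max_total):
    """counts: rating band -> number of standards rated in that band.
    max_total: 4 x the number of standards in the category
    (28 Utility, 12 Feasibility, 32 Propriety, 48 Accuracy)."""
    total = sum(WEIGHTS[band] * n for band, n in counts.items())
    percent = total / max_total * 100
    for frac, name in BANDS:
        if total >= round(frac * max_total):  # reproduces the printed cutoffs
            return total, percent, name
    return total, percent, "Poor"

# Example: 4 Excellent, 2 Very Good, and 1 Good across Utility's 7 standards
# gives 4*4 + 2*3 + 1*2 = 24 points (85.7%), i.e., Very Good (the 19-25 band).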
To meet the requirements for Feasibility, evaluations using the evaluation model should:
F1 Practical Procedures
! Tailor methods and instruments to information requirements
! Minimize disruption
! Minimize the data burden
! Appoint competent staff
! Train staff
! Choose procedures that the staff are qualified to carry out
! Choose procedures in light of known constraints
! Make a realistic schedule
! Engage locals to help conduct the evaluation
! As appropriate, make evaluation procedures a part of routine events
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
F2 Political Viability
! Anticipate different positions of different interest groups
! Avert or counteract attempts to bias or misapply the findings
! Foster cooperation
! Involve stakeholders throughout the evaluation
! Agree on editorial and dissemination authority
! Issue interim reports
! Report divergent views
! Report to right-to-know audiences
! Employ a firm public contract
! Terminate any corrupted evaluation
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
F3 Cost Effectiveness
! Be efficient
! Make use of in-kind services
! Produce information worth the investment
! Inform decisions
! Foster program improvement
! Provide accountability information
! Generate new insights
! Help spread effective practices
! Minimize disruptions
! Minimize time demands on program personnel
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for FEASIBILITY
Add the following:
Number of Excellent ratings (0-3) x 4 = _____
Number of Very Good ratings (0-3) x 3 = _____
Number of Good ratings (0-3) x 2 = _____
Number of Fair ratings (0-3) x 1 = _____
Total score: _____
_____ (Total score) ÷ 12 = _____ x 100 = _____
Strength of the model’s provisions for FEASIBILITY:
! 11 (93%) to 12: Excellent
! 8 (68%) to 10: Very Good
! 6 (50%) to 7: Good
! 3 (25%) to 5: Fair
! 0 (0%) to 2: Poor
To meet the requirements for Propriety, evaluations using the evaluation model should:
P1 Service Orientation
! Assess needs of the program’s customers
! Assess program outcomes against targeted customers’ assessed needs
! Help assure that the full range of rightful program beneficiaries are served
! Promote excellent service
! Make the evaluation’s service orientation clear to stakeholders
! Identify program strengths to build on
! Identify program weaknesses to correct
! Give interim feedback for program improvement
! Expose harmful practices
! Inform all right-to-know audiences of the program’s positive and negative outcomes
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P2 Formal Agreements, reach advance written agreements on:
! Evaluation purpose and questions
! Audiences
! Evaluation reports
! Editing
! Release of reports
! Evaluation procedures and schedule
! Confidentiality/anonymity of data
! Evaluation staff
! Metaevaluation
! Evaluation resources
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P3 Rights of Human Subjects
! Make clear to stakeholders that the evaluation will respect and protect the rights of human subjects
! Clarify intended uses of the evaluation
! Keep stakeholders informed
! Follow due process
! Uphold civil rights
! Understand participant values
! Respect diversity
! Follow protocol
! Honor confidentiality/anonymity agreements
! Do no harm
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P4 Human Interactions
! Consistently relate to all stakeholders in a professional manner
! Maintain effective communication with stakeholders
! Follow the institution’s protocol
! Minimize disruption
! Honor participants’ privacy rights
! Honor time commitments
! Be alert to and address participants’ concerns about the evaluation
! Be sensitive to participants’ diversity of values and cultural differences
! Be even-handed in addressing different stakeholders
! Do not ignore or help cover up any participant’s incompetence, unethical behavior, fraud, waste, or abuse
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P5 Complete and Fair Assessment
! Assess and report the program’s strengths
! Assess and report the program’s weaknesses
! Report on intended outcomes
! Report on unintended outcomes
! Give a thorough account of the evaluation’s process
! As appropriate, show how the program’s strengths could be used to overcome its weaknesses
! Have the draft report reviewed
! Appropriately address criticisms of the draft report
! Acknowledge the final report’s limitations
! Estimate and report the effects of the evaluation’s limitations on the overall judgment of the program
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P6 Disclosure of Findings
! Define the right-to-know audiences
! Establish a contractual basis for complying with right-to-know requirements
! Inform the audiences of the evaluation’s purposes and projected reports
! Report all findings in writing
! Report relevant points of view of both supporters and critics of the program
! Report balanced, informed conclusions and recommendations
! Show the basis for the conclusions and recommendations
! Disclose the evaluation’s limitations
! In reporting, adhere strictly to a code of directness, openness, and completeness
! Assure that reports reach their audiences
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P7 Conflict of Interest
! Identify potential conflicts of interest early in the evaluation
! Provide written, contractual safeguards against identified conflicts of interest
! Engage multiple evaluators
! Maintain evaluation records for independent review
! As appropriate, engage independent parties to assess the evaluation’s susceptibility to, or corruption by, conflicts of interest
! When appropriate, release evaluation procedures, data, and reports for public review
! Contract with the funding authority rather than the funded program
! Have internal evaluators report directly to the chief executive officer
! Report equitably to all right-to-know audiences
! Engage uniquely qualified persons to participate in the evaluation, even if they have a potential conflict of
interest; but take steps to counteract the conflict
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P8 Fiscal Responsibility
! Specify and budget for expense items in advance
! Keep the budget sufficiently flexible to permit appropriate reallocations to strengthen the evaluation
! Obtain appropriate approval for needed budgetary modifications
! Assign responsibility for managing the evaluation finances
! Maintain accurate records of sources of funding and expenditures
! Maintain adequate personnel records concerning job allocations and time spent on the job
! Employ comparison shopping for evaluation materials
! Employ comparison contract bidding
! Be frugal in expending evaluation resources
! As appropriate, include an expenditure summary as part of the public evaluation report
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for PROPRIETY
Add the following:
Number of Excellent ratings (0-8) x 4 = _____
Number of Very Good ratings (0-8) x 3 = _____
Number of Good ratings (0-8) x 2 = _____
Number of Fair ratings (0-8) x 1 = _____
Total score: _____
_____ (Total score) ÷ 32 = _____ x 100 = _____
Strength of the model’s provisions for PROPRIETY:
! 30 (93%) to 32: Excellent
! 22 (68%) to 29: Very Good
! 16 (50%) to 21: Good
! 8 (25%) to 15: Fair
! 0 (0%) to 7: Poor
To meet the requirements for Accuracy, evaluations using the evaluation model should:
A1 Program Documentation
! Collect descriptions of the intended program from various written sources
! Collect descriptions of the intended program from the client and various stakeholders
! Describe how the program was intended to function
! Maintain records from various sources of how the program operated
! As feasible, engage independent observers to describe the program’s actual operations
! Describe how the program actually functioned
! Analyze discrepancies between the various descriptions of how the program was intended to function
! Analyze discrepancies between how the program was intended to operate and how it actually operated
! Ask the client and various stakeholders to assess the accuracy of recorded descriptions of both the intended and the actual program
! Produce a technical report that documents the program’s operations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A2 Context Analysis
! Use multiple sources of information to describe the program’s context
! Describe the context’s technical, social, political, organizational, and economic features
! Maintain a log of unusual circumstances
! Record instances in which individuals or groups intentionally or otherwise interfered with the program
! Record instances in which individuals or groups intentionally or otherwise gave special assistance to the program
! Analyze how the program’s context is similar to or different from contexts where the program might be adopted
! Report those contextual influences that appeared to significantly influence the program and that might be of interest to potential adopters
! Estimate effects of context on program outcomes
! Identify and describe any critical competitors to this program that functioned at the same time and in the program’s environment
! Describe how people in the program’s general area perceived the program’s existence, importance, and quality
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A3 Described Purposes and Procedures
! At the evaluation’s outset, record the client’s purposes for the evaluation
! Monitor and describe stakeholders’ intended uses of evaluation findings
! Monitor and describe how the evaluation’s purposes stay the same or change over time
! Identify and assess points of agreement and disagreement among stakeholders regarding the evaluation’s purposes
! As appropriate, update evaluation procedures to accommodate changes in the evaluation’s purposes
! Record the actual evaluation procedures, as implemented
! When interpreting findings, take into account the different stakeholders’ intended uses of the evaluation
! When interpreting findings, take into account the extent to which the intended procedures were effectively executed
! Describe the evaluation’s purposes and procedures in the summary and full-length evaluation reports
! As feasible, engage independent evaluators to monitor and evaluate the evaluation’s purposes and procedures
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A4 Defensible Information Sources
! Obtain information from a variety of sources
! Use pertinent, previously collected information once validated
! As appropriate, employ a variety of data collection methods
! Document and report information sources
! Document, justify, and report the criteria and methods used to select information sources
! For each source, define the population
! For each population, as appropriate, define any employed sample
! Document, justify, and report the means used to obtain information from each source
! Include data collection instruments in a technical appendix to the evaluation report
! Document and report any biasing features in the obtained information
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A5 Valid Information
! Focus the evaluation on key questions
! As appropriate, employ multiple measures to address each question
! Provide a detailed description of the constructs and behaviors about which information will be acquired
! Assess and report what type of information each employed procedure acquires
! Train and calibrate the data collectors
! Document and report the data collection conditions and process
! Document how information from each procedure was scored, analyzed, and interpreted
! Report and justify inferences singly and in combination
! Assess and report the comprehensiveness of the information provided by the procedures as a set in relation to the information needed to answer the set of evaluation questions
! Establish meaningful categories of information by identifying regular and recurrent themes in information collected using qualitative assessment procedures
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A6 Reliable Information
! Identify and justify the type(s) and extent of reliability claimed
! For each employed data collection device, specify the unit of analysis
! As feasible, choose measuring devices that in the past have shown acceptable levels of reliability for their intended uses
! In reporting reliability of an instrument, assess and report the factors that influenced the reliability, including the characteristics of the examinees, the data collection conditions, and the evaluator’s biases
! Check and report the consistency of scoring, categorization, and coding
! Train and calibrate scorers and analysts to produce consistent results
! Pilot test new instruments in order to identify and control sources of error
! As appropriate, engage and check the consistency between multiple observers
! Acknowledge reliability problems in the final report
! Estimate and report the effects of unreliability in the data on the overall judgment of the program
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A7 Systematic Information
! Establish protocols for quality control of the evaluation information
! Train the evaluation staff to adhere to the data protocols
! Systematically check the accuracy of scoring and coding
! When feasible, use multiple evaluators and check the consistency of their work
! Verify data entry
! Proofread and verify data tables generated from computer output or other means
! Systematize and control storage of the evaluation information
! Define who will have access to the evaluation information
! Strictly control access to the evaluation information according to established protocols
! Have data providers verify the data they submitted
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A8 Analysis of Quantitative Information
! Begin by conducting preliminary exploratory analyses to assure the data’s correctness and to gain a greater understanding of the data
! Choose procedures appropriate for the evaluation questions and nature of the data
! For each procedure specify how its key assumptions are being met
! Report limitations of each analytic procedure, including failure to meet assumptions
! Employ multiple analytic procedures to check on consistency and replicability of findings
! Examine variability as well as central tendencies
! Identify and examine outliers and verify their correctness
! Identify and analyze statistical interactions
! Assess statistical significance and practical significance
! Use visual displays to clarify the presentation and interpretation of statistical results
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A9 Analysis of Qualitative Information
! Focus on key questions
! Define the boundaries of information to be used
! Obtain information keyed to the important evaluation questions
! Verify the accuracy of findings by obtaining confirmatory evidence from multiple sources, including stakeholders
! Choose analytic procedures and methods of summarization that are appropriate to the evaluation questions and employed qualitative information
! Derive a set of categories that is sufficient to document, illuminate, and respond to the evaluation questions
! Test the derived categories for reliability and validity
! Classify the obtained information into the validated analysis categories
! Derive conclusions and recommendations and demonstrate their meaningfulness
! Report limitations of the referenced information, analyses, and inferences
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A10 Justified Conclusions
! Focus conclusions directly on the evaluation questions
! Accurately reflect the evaluation procedures and findings
! Limit conclusions to the applicable time periods, contexts, purposes, and activities
! Cite the information that supports each conclusion
! Identify and report the program’s side effects
! Report plausible alternative explanations of the findings
! Explain why rival explanations were rejected
! Warn against making common misinterpretations
! Obtain and address the results of a prerelease review of the draft evaluation report
! Report the evaluation’s limitations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A11 Impartial Reporting
! Engage the client to determine steps to ensure fair, impartial reports
! Establish appropriate editorial authority
! Determine right-to-know audiences
! Establish and follow appropriate plans for releasing findings to all right-to-know audiences
! Safeguard reports from deliberate or inadvertent distortions
! Report perspectives of all stakeholder groups
! Report alternative plausible conclusions
! Obtain outside audits of reports
! Describe steps taken to control bias
! Participate in public presentations of the findings to help guard against and correct distortions by other
interested parties
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
A12 Metaevaluation
! Designate or define the standards to be used in judging the evaluation
! Assign someone responsibility for documenting and assessing the evaluation process and products
! Employ both formative and summative metaevaluation
! Budget appropriately and sufficiently for conducting the metaevaluation
! Record the full range of information needed to judge the evaluation against the stipulated standards
! As feasible, contract for an independent metaevaluation
! Determine and record which audiences will receive the metaevaluation report
! Evaluate the instrumentation, data collection, data handling, coding, and analysis against the relevant standards
! Evaluate the evaluation’s involvement of and communication of findings to stakeholders against the relevant standards
! Maintain a record of all metaevaluation steps, information, and analyses
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for ACCURACY
Add the following:
Number of Excellent ratings (0-12) x 4 = _____
Number of Very Good ratings (0-12) x 3 = _____
Number of Good ratings (0-12) x 2 = _____
Number of Fair ratings (0-12) x 1 = _____
Total score: _____
_____ (Total score) ÷ 48 = _____ x 100 = _____
Strength of the model’s provisions for ACCURACY:
! 45 (93%) to 48: Excellent
! 33 (68%) to 44: Very Good
! 24 (50%) to 32: Good
! 12 (25%) to 23: Fair
! 0 (0%) to 11: Poor
This checklist is being provided as a free service to the user. The provider of the checklist has not modified or adapted the checklist to fit the specific needs of the user, and the user is exercising his or her own discretion and judgment in using the checklist. The provider of the checklist makes no representations or warranties that this checklist is fit for the particular purpose contemplated by the user and specifically disclaims any such warranties or representations.
Evaluation Checklists Project
www.wmich.edu/evalctr/checklists
PROGRAM EVALUATIONS METAEVALUATION CHECKLIST
(Based on The Program Evaluation Standards)
Daniel L. Stufflebeam
1999
This checklist is for performing final, summative metaevaluations. It is organized according to the Joint
Committee Program Evaluation Standards. For each of the 30 standards the checklist includes 10 checkpoints
drawn from the substance of the standard. It is suggested that each standard be scored on each checkpoint.
Then judgments about the adequacy of the subject evaluation in meeting the standard can be made as follows:
0-2 Poor, 3-4 Fair, 5-6 Good, 7-8 Very Good, 9-10 Excellent. It is recommended that an evaluation be failed if it
scores Poor on standards P1 Service Orientation, A5 Valid Information, A10 Justified Conclusions, or A11
Impartial Reporting. Users of this checklist are advised to consult the full text of The Joint Committee (1994)
Program Evaluation Standards, Thousand Oaks, CA: Sage Publications.
TO MEET THE REQUIREMENTS FOR UTILITY, PROGRAM EVALUATIONS SHOULD:
U1 Stakeholder Identification
! Clearly identify the evaluation client
! Engage leadership figures to identify other stakeholders
! Consult potential stakeholders to identify their information needs
! Use stakeholders to identify other stakeholders
! With the client, rank stakeholders for relative importance
! Arrange to involve stakeholders throughout the evaluation
! Keep the evaluation open to serve newly identified stakeholders
! Address stakeholders' evaluation needs
! Serve an appropriate range of individual stakeholders
! Serve an appropriate range of stakeholder organizations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U2 Evaluator Credibility
! Engage competent evaluators
! Engage evaluators whom the stakeholders trust
! Engage evaluators who can address stakeholders’ concerns
! Engage evaluators who are appropriately responsive to issues of gender, socioeconomic status, race, and language and cultural differences
! Assure that the evaluation plan responds to key stakeholders’ concerns
! Help stakeholders understand the evaluation plan
! Give stakeholders information on the evaluation plan’s technical quality and practicality
! Attend appropriately to stakeholders’ criticisms and suggestions
! Stay abreast of social and political forces
! Keep interested parties informed about the evaluation’s progress
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U3 Information Scope and Selection
! Understand the client’s most important evaluation requirements
! Interview stakeholders to determine their different perspectives
! Assure that evaluator and client negotiate pertinent audiences, questions, and required information
! Assign priority to the most important stakeholders
! Assign priority to the most important questions
! Allow flexibility for adding questions during the evaluation
! Obtain sufficient information to address the stakeholders’ most important evaluation questions
! Obtain sufficient information to assess the program’s merit
! Obtain sufficient information to assess the program’s worth
! Allocate the evaluation effort in accordance with the priorities assigned to the needed information
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U4 Values Identification
! Consider alternative sources of values for interpreting evaluation findings
! Provide a clear, defensible basis for value judgments
! Determine the appropriate party(s) to make the valuational interpretations
! Identify pertinent societal needs
! Identify pertinent customer needs
! Reference pertinent laws
! Reference, as appropriate, the relevant institutional mission
! Reference the program’s goals
! Take into account the stakeholders’ values
! As appropriate, present alternative interpretations based on conflicting but credible value bases
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U5 Report Clarity
! Clearly report the essential information
! Issue brief, simple, and direct reports
! Focus reports on contracted questions
! Describe the program and its context
! Describe the evaluation’s purposes, procedures, and findings
! Support conclusions and recommendations
! Avoid reporting technical jargon
! Report in the language(s) of stakeholders
! Provide an executive summary
! Provide a technical report
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U6 Report Timeliness and Dissemination
! Make timely interim reports to intended users
! Deliver the final report when it is needed
! Have timely exchanges with the program’s policy board
! Have timely exchanges with the program’s staff
! Have timely exchanges with the program’s customers
! Have timely exchanges with the public media
! Have timely exchanges with the full range of right-to-know audiences
! Employ effective media for reaching and informing the different audiences
! Keep the presentations appropriately brief
! Use examples to help audiences relate the findings to practical situations
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
U7 Evaluation Impact
! Maintain contact with audience
! Involve stakeholders throughout the evaluation
! Encourage and support stakeholders’ use of the findings
! Show stakeholders how they might use the findings in their work
! Forecast and address potential uses of findings
! Provide interim reports
! Make sure that reports are open, frank, and concrete
! Supplement written reports with ongoing oral communication
! Conduct feedback workshops to go over and apply findings
! Make arrangements to provide follow-up assistance in interpreting and applying the findings
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for UTILITY
Add the following:
Number of Excellent ratings (0-7) x 4 = _____
Number of Very Good ratings (0-7) x 3 = _____
Number of Good ratings (0-7) x 2 = _____
Number of Fair ratings (0-7) x 1 = _____
Total score: _____
_____ (Total score) ÷ 28 = _____ x 100 = _____
Strength of the evaluation’s provisions for UTILITY:
! 26 (93%) to 28: Excellent
! 19 (68%) to 25: Very Good
! 14 (50%) to 18: Good
! 7 (25%) to 13: Fair
! 0 (0%) to 5: Poor
TO MEET THE REQUIREMENTS FOR FEASIBILITY, PROGRAM EVALUATIONS SHOULD:
F1 Practical Procedures
! Tailor methods and instruments to information requirements
! Minimize disruption
! Minimize the data burden
! Appoint competent staff
! Train staff
! Choose procedures that the staff are qualified to carry out
! Choose procedures in light of known constraints
! Make a realistic schedule
! Engage locals to help conduct the evaluation
! As appropriate, make evaluation procedures a part of routine events
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
F2 Political Viability
! Anticipate different positions of different interest groups
! Avert or counteract attempts to bias or misapply the findings
! Foster cooperation
! Involve stakeholders throughout the evaluation
! Agree on editorial and dissemination authority
! Issue interim reports
! Report divergent views
! Report to right-to-know audiences
! Employ a firm public contract
! Terminate any corrupted evaluation
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
F3 Cost Effectiveness
! Be efficient
! Make use of in-kind services
! Produce information worth the investment
! Inform decisions
! Foster program improvement
! Provide accountability information
! Generate new insights
! Help spread effective practices
! Minimize disruptions
! Minimize time demands on program personnel
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
Scoring the Evaluation for FEASIBILITY
Add the following:
Number of Excellent ratings (0-3) x 4 = _____
Number of Very Good ratings (0-3) x 3 = _____
Number of Good ratings (0-3) x 2 = _____
Number of Fair ratings (0-3) x 1 = _____
Total score: _____
_____ (Total score) ÷ 12 = _____ x 100 = _____
Strength of the evaluation’s provisions for FEASIBILITY:
! 11 (93%) to 12: Excellent
! 8 (68%) to 10: Very Good
! 6 (50%) to 7: Good
! 3 (25%) to 5: Fair
! 0 (0%) to 2: Poor
TO MEET THE REQUIREMENTS FOR PROPRIETY, PROGRAM EVALUATIONS SHOULD:
P1 Service Orientation
! Assess needs of the program’s customers
! Assess program outcomes against targeted customers’ assessed needs
! Help assure that the full range of rightful program beneficiaries are served
! Promote excellent service
! Make the evaluation’s service orientation clear to stakeholders
! Identify program strengths to build on
! Identify program weaknesses to correct
! Give interim feedback for program improvement
! Expose harmful practices
! Inform all right-to-know audiences of the program’s positive and negative outcomes
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P2 Formal Agreements, reach advance written agreements on:
! Evaluation purpose and questions
! Audiences
! Evaluation reports
! Editing
! Release of reports
! Evaluation procedures and schedule
! Confidentiality/anonymity of data
! Evaluation staff
! Metaevaluation
! Evaluation resources
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P3 Rights of Human Subjects
! Make clear to stakeholders that the evaluation will respect and protect the rights of human subjects
! Clarify intended uses of the evaluation
! Keep stakeholders informed
! Follow due process
! Uphold civil rights
! Understand participant values
! Respect diversity
! Follow protocol
! Honor confidentiality/anonymity agreements
! Do no harm
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor
P4 Human Interactions
! Consistently relate to all stakeholders in a professional manner
! Maintain effective communication with stakeholders
! Follow the institution’s protocol
! Minimize disruption
! Honor participants’ privacy rights
! Honor time commitments
! Be alert to and address participants’ concerns about the evaluation
! Be sensitive to participants’ diversity of values and cultural differences
! Be even-handed in addressing different stakeholders
! Do not ignore or help cover up any participant’s incompetence, unethical behavior, fraud, waste, or abuse
! 9-10 Excellent ! 7-8 Very Good ! 5-6 Good ! 3-4 Fair ! 0-2 Poor