The document provides guidance on how to hire the right evaluator for a tobacco prevention and control program evaluation. It outlines four key guidelines: 1) Define the evaluation focus, activities, and develop a request for proposals; 2) Review proposals based on understanding of the evaluation/program, proposed questions/methodology, timeline, and budget; 3) Interview top candidates and check references and previous work; 4) Finalize a contract with the selected evaluator. The document emphasizes the importance of a well-planned evaluation in improving tobacco prevention programs and identifies criteria for selecting an evaluator that can meet the specific needs and goals of the evaluation.
This document provides a summary of a course on risk management. It outlines the course objectives, expected outcomes, skills developed, required materials, instructional methods, schedule, assessment criteria, resources, and instructor contact information. The course objectives focus on planning, identification, analysis, responses, monitoring and control of risks on a project. It will be taught through lectures, demonstrations, discussions, and projects. Assessment will include weekly assignments, projects, quizzes, and a final exam. The instructor can be contacted by email or during posted office hours.
1. Monitoring and evaluation are important functions of management that allow leaders to assess results, improve processes, promote learning, and ensure accountability.
2. Key aspects of monitoring include systematically collecting and analyzing information over time to identify changes and measure progress against plans. Evaluation analyzes the effectiveness and direction of activities by making judgments about impact and progress.
3. Monitoring and evaluation are important school leadership tools to assess whether objectives are being met, adapt plans as needed, identify lessons learned, understand stakeholder perspectives, and ensure efficient and appropriate use of resources.
Almm monitoring and evaluation tools draft[1]acm, by Alberto Mico
This document outlines monitoring and evaluation tools and processes for employment programs. It defines monitoring as highlighting strengths and weaknesses to enable program improvements. Evaluation determines program success and explains outcomes. Key aspects of monitoring include gathering inputs, progress, results, impacts and management data. Evaluation assesses achievement of objectives and recommends improvements. Both quantitative and qualitative data should be analyzed to fully understand outcomes and processes. Regular interim and final evaluations are important to inform future programs.
Pmbok 5th planning process group part four _ Project Risk Management, by Hossam Maghrabi
This is Part Four of the PMBOK Guide Planning Process Group. It covers one Knowledge Area, Project Risk Management, with five processes: Plan Risk Management, Identify Risks, Perform Qualitative Risk Analysis, Perform Quantitative Risk Analysis, and Plan Risk Responses.
A logical framework, or logframe, is a tool used to plan and manage projects. It presents the key components of a project in a matrix format including goals, objectives, activities, outputs, outcomes, and impacts. It also defines indicators for measuring success, identifies assumptions that could impact the project, and outlines risks. Developing a logical framework helps ensure a common understanding of the project among stakeholders and establishes a basis for evaluation and learning.
The Planning Process Group involves defining the strategy and tactics to successfully complete a project. It includes processes like developing the project management plan, collecting requirements, defining the scope, and creating the work breakdown structure (WBS). The key outputs are the project management plan, requirements documentation, scope statement, and WBS. These outputs establish the total scope of work and provide a framework for planning, executing, and controlling the project.
The document discusses instructional leadership and monitoring and evaluation in education. It provides information on defining monitoring and evaluation, the purpose and importance of M&E, and the key steps in conducting monitoring and evaluation.
According to the Project Management Institute (PMI), the Initiating Process Group is the first of the five PMBOK Project Management Process Groups. It consists of the processes performed to define a new project or a new phase of an existing project by obtaining authorization to start the project or phase: Develop Project Charter and Identify Stakeholders.
The document provides details about a workshop on introducing the Logical Framework Approach (LFA) as a project planning tool. It includes an agenda with sections on introducing participants, an overview of LFA including its history and benefits, components of project planning, and an example of applying LFA to modernize settlement services. The workshop aims to help participants understand LFA and use it as an analytic framework for participatory project planning, assessment, and evaluation.
The document provides details about establishing a monitoring and evaluation (M&E) system for a water resources development project in Bangladesh. It discusses conducting a readiness assessment, agreeing on objectives and outcomes, selecting indicators and targets, and planning monitoring, data collection, analysis, and reporting. Key steps include establishing the M&E purpose, conducting a baseline study, developing an evaluation framework, and ensuring the necessary conditions and capacities are in place to support effective M&E.
This document provides information about project monitoring, controlling, and auditing. It introduces the members of Group 10 and defines monitoring as collecting, recording, and reporting on time, cost, and performance. It discusses the purpose and benefits of monitoring, as well as who serves as monitors. The document also covers project controlling, areas of control, and the construction and contents of audit reports. Finally, it defines public-private partnerships and their objectives and characteristics.
WQD2011 - Breakthrough Process Improvement - Tawam Hospital - The Surgical Ad..., by Dubai Quality Group
Breakthrough Process Improvement case study submitted by Tawam Hospital during 3rd Continual Improvement & Innovation Symposium organized by Dubai Quality Group's Continual Improvement Subgroup to celebrate World Quality Day 2011.
Monitoring involves systematically collecting and analyzing data during project implementation to inform decision making, ensure activities are on track, and identify any needed corrections. Evaluation assesses projects after completion to determine relevance, effectiveness, efficiency, sustainability, and impact. Both processes provide information for accountability and learning, with monitoring focusing on operational performance and evaluation making judgments about overall achievement of objectives.
The document discusses various topics related to software development including challenges, opportunities, and best practices. Some key points:
1. Software development involves many phases from requirements analysis to testing to maintenance. It is important to involve users, establish clear standards, and divide projects into well-defined phases and activities.
2. Developing software presents inherent challenges like changing requirements, managing frequent changes, and ensuring compatibility with existing systems. Adopting a problem-solving approach and justifying systems as investments can help address some challenges.
3. Opportunities in software development include making the process more cost-effective, improving quality, and capturing important domain knowledge. New approaches like model-driven development also offer benefits if properly applied.
The document provides an overview of logical framework analysis (LFA), which is a management tool used to design, monitor, and evaluate development projects. It explains the key components of an LFA, including: (1) analyzing problems and objectives, (2) clustering related objectives, (3) scoping the project, (4) determining intervention logic and external factors, and (5) developing a project planning matrix with objectives, indicators, means of verification, and assumptions. The LFA process helps improve project quality by ensuring objectives are clearly defined, stakeholders are involved, and assumptions and risks are explicitly stated.
The document discusses project risk management. It describes the processes of:
1. Planning risk management - Deciding how to approach and plan risk management activities.
2. Identifying risks - Determining risks that could affect the project.
3. Analyzing risks - Prioritizing risks and assessing their impact and probability.
4. Planning risk responses - Developing options to reduce threats and enhance opportunities.
5. Controlling risks - Implementing risk response plans and monitoring risks.
This document provides guidance on monitoring and evaluation (M&E) for organizations. It discusses the importance of M&E and key concepts like indicators, results chains, and identifying evidence of change. The document emphasizes that M&E requires organizational and technical readiness, including clear frameworks, evidence-based planning, relevant skills, and experience. It also provides examples of performance measures and developing them for different sectors. Worksheets are included to help participants apply these M&E concepts.
This document provides an overview of monitoring and evaluation for projects and organizations. It discusses what monitoring and evaluation are, how they are different but related processes, and why they are important. Monitoring involves systematically collecting and analyzing information during a project to improve efficiency and effectiveness. Evaluation compares actual project impacts to strategic plans and examines accomplishments and methods. Both aim to support learning, focus on efficiency, effectiveness and impact, and are best done with proper prior planning. The document outlines key aspects of planning, designing and implementing effective monitoring and evaluation systems.
Monitoring and evaluation of human rights projects, by Inka Pibilova
This document provides information about monitoring and evaluating human rights projects. It discusses the differences between evaluation, monitoring, and auditing. Evaluation assesses project effectiveness, impact, and sustainability for learning and accountability. Monitoring analyzes ongoing project progress toward planned results to improve management. The document also outlines why monitoring and evaluation are important, the project cycle, stakeholder analysis, problem trees, objectives, indicators, risks analysis, and tools for monitoring and evaluating projects.
This document provides an overview of project evaluation. It discusses why evaluations are conducted (to provide feedback and determine if objectives were met), who is typically involved in the evaluation process (e.g. project directors, evaluators), and how to design and carry out an evaluation (e.g. creating an evaluation plan with goals, indicators, and benchmarks). The document also covers common evaluation methods like interviews, surveys, documentation review and observations. Overall, the document aims to demystify the evaluation process and provide guidance on the key questions of why, who, where, when and how to approach project evaluation.
Measurement of project quality performance, by K.K. PATHAK
The document discusses measuring project quality performance for the construction industry. It outlines the need to measure quality, identifies measurable quality parameters, and describes how to score and evaluate projects based on these parameters. Key points include:
- Measuring quality allows understanding of standards, trends, strengths/weaknesses to enhance quality.
- Parameters include implementation of quality plans, product quality, record availability, open issues, repair costs, amounts withheld.
- Projects are scored and graded (A, B, C) based on criteria for each parameter, with weights assigned (see the sketch after this list).
- A sample score sheet and quality trends report are shown as examples of the measurement system.
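As a rough illustration of the weighted scoring and A/B/C grading mentioned in the list above, here is a minimal sketch; the parameter names mirror the document's list, but the 0-10 scale, weights, example scores, and grade cut-offs are assumptions for illustration only, not the document's actual criteria.

```python
# Hypothetical weighted quality score for one project. Parameters echo the
# list above; the 0-10 scale, weights, scores, and A/B/C cut-offs are
# illustrative assumptions, not the document's actual criteria.

weights = {
    "quality_plan_implementation": 0.30,
    "product_quality": 0.30,
    "record_availability": 0.15,
    "open_issues": 0.10,
    "repair_costs": 0.10,
    "amounts_withheld": 0.05,
}

scores = {  # example scores for one project, 0 (poor) to 10 (excellent)
    "quality_plan_implementation": 8,
    "product_quality": 7,
    "record_availability": 9,
    "open_issues": 6,
    "repair_costs": 7,
    "amounts_withheld": 10,
}

weighted_score = sum(weights[p] * scores[p] for p in weights)  # out of 10
grade = "A" if weighted_score >= 8 else "B" if weighted_score >= 6 else "C"
print(f"Weighted score: {weighted_score:.2f}/10, grade {grade}")
```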
The document discusses project risk management processes including:
1) Planning risk management to define the approach and ensure sufficient resources.
2) Identifying risks through various techniques like brainstorming and checklists.
3) Analyzing risks qualitatively by assessing probability and impact, and quantitatively using tools like decision trees (a minimal sketch follows this list).
4) Developing responses like mitigation plans, contingency plans and fallbacks to enhance opportunities and reduce threats.
5) Monitoring and controlling risks, residual risks, and the effectiveness of the risk management process.
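A minimal sketch of how the qualitative (probability x impact) and quantitative (expected monetary value) analysis steps listed above might be computed; the risks, scales, and dollar figures are hypothetical assumptions, not taken from the document.

```python
# Qualitative step: rank risks by probability x impact score.
# Quantitative step: expected monetary value (EMV) = probability x impact ($).
# All risks, scales, and dollar figures below are illustrative assumptions.

risks = [
    # (name, probability 0-1, impact score 1-5, impact in dollars)
    ("Key supplier delay", 0.4, 4, 50_000),
    ("Requirements change", 0.6, 3, 30_000),
    ("Staff turnover", 0.2, 5, 80_000),
]

# Qualitative analysis: a probability-impact score used to prioritize the register.
for name, prob, impact, _ in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: P x I = {prob * impact:.2f}")

# Quantitative analysis: summed EMV gives a rough contingency reserve.
contingency = sum(prob * dollars for _, prob, _, dollars in risks)
print(f"Suggested contingency reserve: ${contingency:,.0f}")
```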
This document outlines the logical framework for a project, including its overall objectives, specific objectives, expected results, activities, and indicators for measuring achievement. The project aims to contribute to broader development goals through a specific objective. Key activities will be carried out to produce expected results and work towards the specific objective. Objectively verifiable indicators, sources of verification, and external assumptions are identified to monitor progress.
This document provides an overview of project risk management. It defines risk and discusses key concepts like risk appetite, tolerance, and threshold. It also categorizes examples of risks as external, internal, technical, and management-related. The chapter outlines the process for planning risk management, including inputs like the project management plan, charter, and stakeholder register. Tools and techniques for planning risk management include analytical methods and expert judgment. The main output is a risk management plan that defines the methodology, roles, budget, risk categories, and risk matrix to be used to manage project risks.
This document discusses ensuring fidelity when implementing programs. It defines fidelity as the degree to which programs are implemented as intended. It identifies several key aspects of fidelity including adherence (using appropriate materials and delivering to the right population), exposure (frequency and duration), quality of delivery, participation and responsiveness, and program differentiation. The document also provides resources for measuring fidelity such as observation tools and program-specific fidelity instruments.
Description
Next Generation Science Standards identifies the science all K-12 students should know. These new standards are based on the National Research Council's A Framework for K-12 Science Education. The National Research Council, the National Science Teachers Association, the American Association for the Advancement of Science, and Achieve have partnered to create standards through a collaborative state-led process. The standards are rich in content and practice and arranged in a coherent manner across disciplines and grades to provide all students an internationally benchmarked science education.
The New Mexico Human Services Department's strategic plan for fiscal year 2011 outlines goals to: 1) insure more New Mexicans through expanding access to affordable health coverage options; 2) improve health outcomes and family support through initiatives like school-based health services; and 3) combat hunger and improve nutrition by reducing hunger among children through programs like SNAP and school meal programs. Performance measures and targets are established to track progress towards these goals.
The economic cost of drug, alcohol and tobacco abuse in the United States is more than $500 billion per year. Effective substance abuse prevention can yield major economic savings, with every $1 invested in prevention saving between $10-20 in treatment costs. Research shows preventive interventions are most economically beneficial when the condition is prevalent and costly, effective interventions exist, and intervention costs are low.
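As a rough worked example of that ratio (figures hypothetical): a program spending $50,000 per year on prevention would, at the cited $10 to $20 saved per dollar invested, be expected to avert roughly $500,000 to $1,000,000 in treatment and related costs.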
The document discusses underage drinking in New Mexico, which has the highest percentage of youth drinking before age 13: 34.1% in Santa Fe County, compared to the national average of 21.1%. It emphasizes the importance of primary prevention strategies, as they are the least costly approach. The core of prevention lies within the six strategies of the Center for Substance Abuse Prevention, including information dissemination, prevention education, and alternative activities. Several community resources for prevention in New Mexico are then listed.
This report summarizes youth risk and resiliency survey data from 31 New Mexico counties on alcohol use among middle and high school students between 2009-2015. The data shows declines over time in ever drinking alcohol, binge drinking, drinking before age 11 and current alcohol use among students. The report highlights that local prevention programs have been successful in reducing substance abuse and building community capacity.
This document discusses the current state and future needs of behavioral health prevention. It makes three key points:
1) The field of prevention has advanced scientifically but funding and workforce development have not kept pace, leaving the current prevention workforce ill-prepared. A bachelor's degree and prevention-specific training are needed.
2) A coordinated, evidence-based statewide prevention system is needed to eliminate silos and political agendas. Local needs can still be met through ongoing community assessments.
3) Prevention must integrate across systems and sectors through partnerships. The prevention workforce, leadership, and dissemination of research must expand to engage all parts of the community, including businesses and organizations. Coordination is needed across federal, state, and local levels.
Prevention programs should address risk and protective factors, tailor interventions to the specific risks and population, and enhance protective factors like family bonding. Effective programs provide parenting skills, teach families how to develop and enforce drug policies, and give parents drug education to discuss with their children. School-based programs can intervene early to address risk factors for drug abuse.
Inter-professional Education for Collaboration: Learning How to Improve Health from Inter-professional Models Across the Continuum of Education to Practice
The document provides an overview of the IFRC Framework for Evaluation, which guides how evaluations are planned, managed, conducted, and utilized by the IFRC Secretariat. The framework promotes reliable, useful, and ethical evaluations to contribute to organizational learning, accountability, and the IFRC's mission. It outlines key parts of the framework, including evaluation criteria to guide what is evaluated and standards and processes to guide how evaluations are conducted. The framework is intended to guide those involved in evaluations and inform stakeholders about expected practices.
This presentation describes the basics of conducting a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA, or "logframe") and a practical example from the CLICS project. It also includes the CDC framework for program evaluation.
N.B.: Kindly open the presentation in SlideShare mode to make full use of the animations.
This document discusses monitoring and evaluation (M&E) of projects and programs. It defines monitoring as the regular collection and analysis of information to track changes over time, while evaluation analyzes effectiveness, direction, and impact of an activity. The main differences are timing (monitoring is ongoing, evaluation is periodic) and questions asked (monitoring checks progress, evaluation assesses outcomes and impact). M&E helps assess results, improve management, promote learning, and ensure accountability. Key criteria for evaluating development assistance include relevance, efficiency, effectiveness, impact, and sustainability.
Before conducting a program evaluation, an organization should ensure its board is functioning properly and it has established programs in place. A program evaluation involves examining a program's inputs, processes, outputs, and outcomes. The scope and depth of a program evaluation depend on the information needs and available resources. Key considerations in designing an evaluation include determining its purpose, intended audiences, needed information, data collection sources and methods, timeline, and available resources.
This document discusses planning, monitoring, and evaluating health services. It defines monitoring and evaluation as key functions to improve performance and determine whether programs are achieving their goals. Monitoring involves systematic observation of activities, while evaluation assesses achievement against criteria. Both use indicators and data collection to analyze inputs, processes, outputs, outcomes, and impacts. Evaluation can be conducted internally or externally. The evaluation process involves planning, method selection, data collection and analysis, reporting, and dissemination. Both qualitative and quantitative methods are used. The goal is to improve programs and determine their effectiveness, efficiency, and relevance in improving health.
This document discusses management-oriented evaluation approaches. It begins by stating that these approaches aim to serve decision makers by providing evaluation information that supports good decision making. It describes the CIPP model created by Stufflebeam, which evaluates programs based on Context, Input, Process, and Product. The document also discusses other early evaluation models like the UCLA model. It notes that strengths of the management approach include focusing evaluations and linking them to decision making. Potential limitations include the evaluator becoming too aligned with management or evaluations becoming too complex.
The document provides guidance on conducting evaluations. It defines evaluation as the assessment of a project's level of achievement and impact. Evaluations are important for accountability, transparency, and learning. They focus on determining what worked and did not work in an intervention. The document outlines the key steps in managing an evaluation including developing an evaluation plan, collecting data, and reporting findings and lessons learned.
Independent peer review helps assure quality, value, and objectivity for research projects and programs. Peer reviews can evaluate proposals, assess interim progress, and review results. Effective peer reviews involve subject area experts providing independent perspectives to inform funding decisions. Maintaining integrity through an unbiased, transparent process is key. Various types of peer reviews exist, from electronic reviews of proposals to multi-step reviews of large programs. Proper planning, identification of qualified reviewers while avoiding conflicts of interest, effective conduct and information management, and documentation of lessons learned can help ensure a successful peer review.
The document discusses evaluation of health programs. It defines evaluation as the systematic acquisition and assessment of information to provide useful feedback. The main goals of evaluation are to influence decision-making and policy formulation through empirically-driven feedback. Formative evaluation assesses needs and implementation, while summative evaluation determines outcomes, impacts, costs and benefits. Evaluation questions, methods, and frameworks are described to establish program merit, worth and significance based on credible evidence from stakeholders. Standards ensure evaluations are useful, feasible, proper and accurate.
The document discusses the key steps in developing an effective evaluation plan, including identifying stakeholders, budgeting, establishing procedures, collecting and analyzing data, reporting results, and using the evaluation to improve programs. An evaluation plan should identify evaluation questions, data needs, collection methods, timelines, and staffing. It should produce a report that clearly presents changes, their causes, costs, and recommendations for strengthening the program. Avoiding pitfalls like attribution errors and making use of all collected data helps ensure a useful evaluation.
The OECD Due Diligence Guidance for Responsible Supply Chains in the Garment and Footwear Sector is used as the benchmark for due diligence by many industry and multi-stakeholder initiatives. The Alignment Assessment Tool serves to evaluate the alignment of the standards and implementation of these initiatives with the recommendations in the OECD Due Diligence Guidance. To find out more, visit https://mneguidelines.oecd.org/Alignment-assessment-due-diligence-garment-footwear.htm
PJM6125 Project Evaluation: Selecting Evaluation Tools
Overview and Rationale
For this assignment, you will be selecting evaluation tools and adding those selected to the Evaluation Goal Matrix that you developed as part of the previous assignment.
Program and Course Outcomes
This assignment is directly linked to the following key learning outcomes from the course syllabus:
LO3: Analyze and apply appropriate evaluation tools
LO7: Plan and conduct a tactical evaluation using both qualitative and quantitative measures
In addition to these key learning outcomes, you will also have the opportunity to evidence the following skills through completing this assignment:
Critical thinking
Problem solving
Essential Components & Instructions
Using the project you identified in Lesson 1 and the stakeholder analysis and performance metrics you identified as part of Lesson 2, you will be selecting evaluation tools and adding them to your Evaluation Goal Matrix from Lesson 2.
Begin by updating your Evaluation Goal Matrix with any feedback provided, and then add a column titled 'Evaluation Tool', selecting an evaluation tool for each of the metrics you identified during Lesson 2. In other words, you will identify a minimum of one evaluation tool for each entry in your evaluation goal matrix (an illustrative sketch follows).
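As an illustrative sketch only, the added 'Evaluation Tool' column amounts to mapping each existing metric to at least one tool; the goals, metrics, and tools below are hypothetical placeholders, not part of the assignment materials.

```python
# Hypothetical Evaluation Goal Matrix rows; none of these entries come from
# the assignment itself. The point is simply: one tool mapped to each metric.
matrix = [
    {"goal": "Stakeholder satisfaction", "metric": "Satisfaction rating >= 4/5"},
    {"goal": "Schedule performance", "metric": "Milestones delivered on time"},
]

# Illustrative metric-to-tool mapping; a single tool may serve several goals.
tools = {
    "Satisfaction rating >= 4/5": "End-of-phase stakeholder survey",
    "Milestones delivered on time": "Schedule variance report",
}

for row in matrix:
    row["evaluation_tool"] = tools[row["metric"]]  # the new 'Evaluation Tool' column
    print(row)
```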
Once you identify the tools that you will use, write a few paragraphs on how each tool will be used, why the tool was selected, who will be responsible for performing the evaluation with the tool, and how the data will be used and will help support the success of the project. These entries can be made below your updated evaluation goal matrix. You should provide a thorough evaluation and explanation of each tool you list in your updated evaluation goal matrix. You may wish to use materials from the lesson, readings, and external sources in writing the explanations. However, be sure to cite any sources that you use.
Format
Below are some key guidelines you will want to ensure you follow. Think of this short list as a quality control checklist, along with the attached grading rubric.
- Be sure you have identified at least one tool per goal in your matrix.
- You may use a tool to assess multiple goals if it is appropriate; if you do this, make sure your explanation provides sufficient detail to address all the goals the tool addresses.
- You should submit an updated Evaluation Goal Matrix and the narrative description and explanation of each tool in a single file (MS Word or PDF).
- You should include a cover page.
- You should provide a brief abstract about the process you went through to develop your two tables.
- You should provide a summary of your project.
- You should format the document professionally.
- The tables should be readable without having to zoom in on small text.
Rubric(s)
Asse.
This document provides guidance on designing program evaluations in 3-5 sentences. It discusses clarifying the program's goals and strategy, developing relevant evaluation questions, and selecting an appropriate evaluation design and approach. It also covers identifying appropriate data sources and collection procedures, developing plans to analyze data to allow for valid conclusions, and defining key parts of an evaluation plan such as objectives, information sources, data collection methods, and analysis plans.
The document discusses establishing evaluation criteria and methods for mentoring programs. It recommends developing plans to measure both program processes and expected outcomes. For outcomes, programs should specify expected impacts, select instruments to measure them, and implement an evaluation design. The final stage involves refining the program based on findings and disseminating regular reports to stakeholders. Conducting rigorous evaluations requires expertise and can cost $5,000-$10,000 but is important for accountability and demonstrating a program's effectiveness.
This document discusses developing logic models to focus program evaluations. It defines logic models and their components, and provides an example logic model for an education program to prevent HIV infection. Logic models describe the resources, activities, outputs, and short- and long-term outcomes of a program, helping evaluators design focused evaluation questions. The document emphasizes engaging stakeholders in developing the logic model and determining the evaluation's purpose and questions.
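A minimal sketch of a logic model represented as a simple data structure, loosely in the spirit of the HIV-prevention education example mentioned above; every entry below is a hypothetical placeholder, not content from the document.

```python
# A logic model chains resources -> activities -> outputs -> outcomes.
# All entries are hypothetical placeholders for illustration only.
logic_model = {
    "resources": ["Trained health educators", "Curriculum materials", "Grant funding"],
    "activities": ["Deliver classroom sessions", "Distribute prevention materials"],
    "outputs": ["Number of sessions held", "Number of students reached"],
    "short_term_outcomes": ["Increased knowledge of HIV transmission"],
    "long_term_outcomes": ["Reduced HIV infection rates in the target population"],
}

# Evaluation questions are then tied to specific links in the chain, e.g.
# "Did the sessions held (output) increase knowledge (short-term outcome)?"
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```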
Here is an example of how the activity sheet could be completed:
School Readiness Component | Current level | Next level | Evidence of moving forward
Leadership & Management | 2 | 3 | More distributed leadership seen, e.g. HODs taking responsibility for subject improvement plans. SMT spends dedicated time on curriculum rather than admin issues.
Teaching & Learning | 1 | 2 | Evidence of differentiated teaching seen in all classes. Learner support processes strengthened. Assessment used more formatively to inform teaching.
Infrastructure & Resources | 1 | 2 | All classes have necessary resources and facilities. Maintenance plan in place. ICT integrated across subjects. Library fully resourced.
School Culture & Climate | 2 | 3 | All learners and teachers feel psychologically safe and
The document outlines DataActiva's approach to program evaluation through 10 tasks:
1) Conduct start-up meetings to discuss the research plan and identify data sources
2) Design surveys for participants, non-participants, and stakeholders
3) Develop a sampling plan to collect necessary information from target groups
4) Collect accurate data from the samples through online/phone/in-person methods
5) Conduct a process evaluation through stakeholder interviews and customer surveys
6) Conduct an impact evaluation combining data sources to assess program effects
7) Reporting will describe methods, results, and provide an assessment of the program
The document summarizes the Be-Above the Influence (BE-ATI) curriculum that was delivered to over 2,400 high school students in Albuquerque Public Schools. It discusses how the curriculum aims to reduce underage drinking by educating youth about the effects of alcohol on the developing brain. It also briefly profiles the partner organization SafeTeen New Mexico, describing it as a youth-driven nonprofit that creates programs on issues like underage drinking, drugs, and dating violence and has educated over 5 million people since 2001.
The document summarizes the "Above the Influence" social marketing campaign started over 3 years ago in Bernalillo County, New Mexico. The campaign aims to lower risk factors for youth such as underage drinking, drugs, bullying, and violence. It has over 330 partners including schools, community centers, and businesses. The campaign uses evidence-based strategies and builds on an existing national campaign to encourage youth to pledge to stay "Above the Influence" of alcohol, drugs, bullying, and violence.
This document provides a comprehensive list of evidence-based practice resources for promoting community health, development, and prevention interventions. It includes over 30 links to databases, organizations, and categories of best practices on topics like adolescent pregnancy, cancer, child and youth development, and more. The resources provide systematic reviews, guidelines, and searchable databases of proven community programs and policies.
Albert Einstein indeed stands like a giant amid the pantheon of scientific figures of the twentieth century. His ideas unleashed a revolution whose changes are still being felt into the new century.
This day and age we're living in / Gives cause for apprehension / With speed and new invention / And things like fourth dimension / Yet we get a trifle weary / With Mr. Einstein's theory / So we must get down to earth at times / Relax, relieve the tension / And no matter what the progress / Or what may yet be proved / The simple facts of life are such / They cannot be removed / You must remember this / A kiss is just a kiss / A sigh is just a sigh / The fundamental things apply / As time goes by . . .
The document summarizes a workshop on opportunities to promote children's behavioral health through health care reform and beyond. The 3-day workshop brought together experts from government agencies, foundations, medical organizations, and academia to discuss funding opportunities for evidence-based prevention and interventions. Over 100 participants examined ways to strengthen the children's behavioral health system and ensure access to services. The workshop aimed to inform efforts to improve children's well-being and long-term outcomes.
The majority of teens in Albuquerque do not drink according to a survey by the New Mexico Department of Health. The Mayor's Youth Advisory Council encourages teens to "BE Above the Influence" and not drink by promoting the website www.AboveTheInfluence.com.
FORUM ON INVESTING IN YOUNG CHILDREN GLOBALLY OVERVIEW
In January 2014, the Board on Children, Youth, and Families of the Institute of Medicine (IOM) and the National Research Council (NRC), in collaboration with the IOM Board on Global Health, launched the Forum on Investing in Young Children Globally (forum). At this meeting, the participants agreed to focus on creating and sustaining, over 3 years, an evidence-driven community of stakeholders that aims to explore existing, new, and innovative science and research from around the world and translate this evidence into sound and strategic investments in policies and practices that will make a difference in the lives of children and their caregivers.
Abstract
Approximately 20 percent of Americans are affected by mental health and substance use disorders, which are associated with significant morbidity and mortality. While the evidence base for the effectiveness of interventions to treat these disorders is sizable, a considerable gap exists between what is known to be effective and interventions that are actually delivered in clinical care. Addressing this quality chasm in mental health and substance use care is particularly critical given the recent passage of the Patient Protection and Affordable Care Act (ACA) and Mental Health Parity and Addiction Equity Act, which are changing the delivery of care and access to treatments for mental health and substance use disorders. Increasing emphasis on accountability and performance measurement, moreover, will require strategies to promote and measure the quality of psychosocial interventions.
In this report, the study committee develops a framework that can be used to chart a path toward the ultimate goal of improving the outcomes of psychosocial interventions for those with mental health and substance use disorders. This framework identifies the key steps entailed in successfully bringing an evidence-based psychosocial intervention into clinical practice. It highlights the need to (1) support research to strengthen the evidence base on the efficacy and effectiveness of psychosocial interventions; (2) based on this evidence, identify the key elements that drive an intervention’s effect; (3) conduct systematic reviews to inform clinical guidelines that incorporate these key elements; (4) using the findings of these systematic reviews, develop quality measures—measures of the structure, process, and outcomes of interventions; and (5) establish methods for successfully implementing and sustaining these interventions in regular practice, including the training of providers of these interventions. The committee intends for this framework to be an iterative one, with the results of the process being fed back into the evidence base and the cycle beginning anew. Central to the framework is the importance of using the consumer perspective to inform the process.
The recommendations offered in this report are intended to assist policy makers, health care organizations, and payers that are organizing and overseeing the provision of care for mental health and substance use disorders while navigating a new health care landscape. The recommendations also target providers, professional societies, funding agencies, consumers, and researchers, all of whom have a stake in ensuring that evidence-based, high-quality care is provided to individuals receiving mental health and substance use services.
Most teens in Bernalillo County do not drink, according to a NM-DOH survey, and the Mayor's Youth Advisory Council encourages teens to be above the influence of alcohol and to own their good decisions through its website www.AboveTheInfluence.com, which promotes an anti-drug and anti-alcohol message.
This document provides an overview of a workshop on building capacity to reduce bullying. The workshop, organized by the Institute of Medicine and National Research Council, brought together representatives from different sectors involved in bullying prevention. Presenters discussed research on effective bullying prevention programs in schools, with peers, families, communities, and online. Student and school personnel panels also shared perspectives. The goal was to identify successful conceptual models and interventions, discuss how to increase protective factors for youth, and explore appropriate roles for different groups in prevention. Over 200 people participated via webcast. The workshop aimed to help address the substantial public health problem of bullying and close remaining knowledge gaps.
Bullying—long tolerated as just a part of growing up—finally has been recognized as a substantial and preventable health problem. Bullying is associated with anxiety, depression, poor school performance, and future delinquent behavior among its targets, and reports regularly surface of youth who have committed suicide at least in part because of intolerable bullying. Bullying can also have harmful effects on children who bully, on bystanders, on school climates, and on society at large. Bullying can occur at all ages, from before elementary school to after high school. It can take the form of physical violence, verbal attacks, social isolation, spreading rumors, or cyber bullying.
Increased concern about bullying has led 49 states and the District of Columbia to enact anti-bullying legislation since 1999. In addition, research on the causes, consequences, and prevention of bullying has expanded greatly in recent decades. However, major gaps still exist in the understanding of bullying and of interventions that can prevent or mitigate the effects of bullying.
This publication examines reviewed research on bullying prevention and intervention efforts as well as efforts in related areas of research and practice, implemented in a range of contexts and settings, including
• Schools
• Peers
• Families
• Communities
• Laws and Public Policies
• Technology
This document analyzes the affordability of alcoholic beverages in the United States from 1950 to 2011. It finds that alcohol has become dramatically more affordable over this period due to declines in real prices. The percentage of mean disposable income required to purchase one drink per day of the cheapest spirits brand fell from 4.46% in 1950 to 0.29% in 2011. Affordability of popular beer and wine brands also increased substantially. Reduced federal and state alcohol tax rates, which were not adjusted for inflation, were a major driver of the declines in real prices and increases in affordability. Higher and indexed tax rates could help mitigate further declines in prices and increases in affordability.
Despite spending far more on medical care than any other nation and despite having seen a century of unparalleled improvement in population health and longevity, the United States has fallen behind many of its global counterparts and competitors in such health outcomes as overall life expectancy and rates of preventable diseases and
injuries.
A fundamental but often overlooked driver of the imbalance
between spending and outcomes is the nation’s inadequate investment in nonclinical strategies that promote health and prevent disease and injury population-wide, strategies that fall under the rubric of “population
health.
Businesses across the nation are involved in every aspect of their communities and the economy and can be powerful partners in terms of improving the health of the nation, said George Isham, a senior advisor at HealthPartners, Inc., a senior fellow at the HealthPartners Institute for Education and Research, and a co-chair of the Institute of Medicine (IOM) Roundtable on Population Health Improvement. On July 30, 2014, the IOM roundtable held a workshop at the New York Academy of Medicine (NYAM) in New York City to consider the role of business in improving population health beyond the usual worksite wellness and health promotion activities.
In welcoming participants to NYAM, the academy’s president, Jo Ivey Boufford, said that economic development is a crucial factor in achieving population health and that there are many opportunities to create win–win situations for business to promote population health in the communities where they live and serve. She added that in New York State business has been a fundamental
part of a large, multi-stakeholder group that is implementing a prevention agenda for the state and helping communities to identify and address priority needs.
Combined with the more traditional employer occupational safety and health protection activities are newer employment-based programs to promote better health through helping workers quit smoking, lose weight, reduce stress, or exercise more regularly. In support of these efforts, some employers have made changes in their policies and facilities to support physical activity and healthier eating, and some employers connect with ommunity resources for health education, health fairs, and
other services. From company to company, the interest in, resources for, and ability to do more for employee health and well-being vary. Employees’ interest in, needs for, and priorities for these types of programs also vary.
How to Hire
The Right Evaluator
For Your
Tobacco Prevention and Control Program
Jamey Wise, MS
Director of Evaluation,
Florida Tobacco Pilot Program
Bureau of Epidemiology
Florida Department of Health
February 2000
How to Hire the Right Evaluator
for Your Tobacco Prevention and Control Program [1]
Introduction
Program evaluation is an important aspect of every tobacco prevention and control program. The essential
components of successful programs can be identified and replicated through well-designed, systematic evalua-
tion. Evaluation also identifies less effective program elements that need to be modified. Ultimately, evaluation
results provide stakeholders − including the intended audience, community leaders and policy makers − with
evidence of the impacts of tobacco prevention and control activities, and help them make decisions about how
to improve these activities to achieve the desired outcomes.
Many programs find hiring an external evaluator to be an effective means of obtaining a useful evaluation.
An external evaluator must be chosen carefully, however, because program personnel can easily become dis-
satisfied with an inappropriate or inadequate evaluation and may come to question the usefulness of the results
obtained.
Well-planned and well-conducted evaluations are invaluable in determining where a tobacco prevention and
control program succeeds and how it can be improved. The following four guidelines can assist you in selecting
and working with an appropriate external evaluator.
1. Define the evaluation.
2. Select an evaluator.
3. Work closely with the evaluator.
4. Use the results.
Guideline 1: Define the evaluation.
Defining the evaluation includes three primary steps:
a) defining the focus of the evaluation;
b) defining the evaluation activities; and
c) developing a request for evaluation proposals.
Evaluation Focus
The focus of the evaluation is what you want to have evaluated. You need a clear understanding of what
you want the evaluation to accomplish. Do you want an assessment of how well you planned and implemented
your services? Do you want an assessment of the results of your services?
Keep in mind that the evaluation should be decision-focused. At each step of the design process ask your-
self how the data collected will inform programmatic decisions. Evaluation can assist you in making many types
of decisions, including:
1. information decisions: options for changes in collection and use of program performance information;
2. management decisions: options for changes in program activities; and
3. policy decisions: options for changes in program resources or objectives.
[1] This guide is based on an anonymously authored document, “Hiring the Right Evaluator for Your Program,” developed for the Corporation for National Service, to which I contributed advice but which I did not author. I would also like to express my appreciation to the people who provided helpful comments: Edward Trapido, Norman Weatherby, and Sharon Luebbers.
Defining the scope of the evaluation means identifying the issues the evaluation is intended to address. Once
the focus is established, the key evaluation questions follow:
♦ How well are we managing our program?
♦ How well did we plan and implement our program?
♦ What are the outcomes achieved by our program?
An evaluation can address any or all of these questions. These questions will eventually serve as the basis
for the evaluator to elaborate more detailed questions in an evaluation plan. Once you know what questions
you want answered, then you are ready to define the evaluation activities.
Evaluation Activities
Evaluation activities are the scope of work expected from the evaluator. These tasks should relate directly
to the conduct of the evaluation. The evaluator may be expected to:
♦ develop the evaluation plan;
♦ develop evaluation instruments;
♦ select the sampling procedures and draw the evaluation sample;
♦ collect evaluation data;
♦ analyze evaluation data;
♦ compose the evaluation report; and
♦ present the evaluation results.

Using an Evaluation Committee

The question of what should be evaluated is often best answered by committee, and an Evaluation Committee can be a very valuable work group for all aspects of the evaluation. An evaluation committee is useful for overseeing the entire evaluation process, from initial planning through implementation and crafting of the final report. Persons representing the various aspects of your program, including Partnership and SWAT Members, supervisors and staff, partner agency staff, and other key people involved in programmatic planning best constitute this committee.

The committee's size will depend upon the size of your program. A large program might support an evaluation committee of five to seven persons, chaired by the program director. Smaller programs, of course, may require no more than three people; some, only a single staff member. In other cases, the evaluation committee might need to include the entire program staff.

The evaluation committee has primary responsibility for all aspects of the evaluation and receives regular reports from the external evaluator. Committee members must be actively involved in the evaluation process to increase their understanding of it and to allow them to feel ownership of, and make use of, evaluation results.

Requesting Evaluation Proposals

Once the evaluation tasks have been determined, you are ready to develop a request for evaluation proposals (RFP) [2]. The RFP is designed to solicit written evaluation proposals from which you can select the best candidate. The RFP must follow your agency’s purchasing guidelines, and should include a statement of the evaluation focus, scope of work, and any other requirements. An overall description of your tobacco prevention and control program, an estimate of the available evaluation funds, and all required meetings (e.g., planning meeting, regular updates, interim report presentation, final debrief) should also be included in this document.
The RFP should require evaluators to include several main sections in their evaluation proposal:
1. Statement of the purpose of the evaluation. (This section ensures that the evaluator understands the
intended focus of the evaluation being sought.)
2. Statement of the program goals and objectives that are relevant to the focus of the evaluation. (This
section ensures that the evaluator understands the program design.)
3. Statement of proposed evaluation questions. These should be developed from the information provided
under Guideline 2.
4. Proposed methodology for addressing the evaluation questions.
[2] In Florida, the service procurement document may take the form of a less formal “invitation to negotiate” or more formal “request for proposals” (or some other form). Check with your procurement office to determine which approach you should use. The steps outlined here work well under a number of procurement methods.
5. Proposed timeline of evaluation activities.
6. Proposed deliverables – typically a) evaluation instruments; b) data set; c) monthly reports; d) interim
report (if needed); and e) final report.
7. Proposed budget.
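To make the review of incoming proposals concrete, here is a minimal sketch (not part of the original guide) of how a reviewer might check each submitted proposal for the seven required sections before substantive review; the section names simply mirror the list above, and the sample proposal is hypothetical.

```python
# Illustrative completeness check for incoming evaluation proposals.
# The seven section names mirror the RFP requirements listed above.

REQUIRED_SECTIONS = [
    "purpose of the evaluation",
    "program goals and objectives",
    "evaluation questions",
    "methodology",
    "timeline",
    "deliverables",
    "budget",
]

def missing_sections(proposal_sections):
    """Return the required sections a proposal failed to include."""
    provided = {s.lower() for s in proposal_sections}
    return [s for s in REQUIRED_SECTIONS if s not in provided]

# Hypothetical example: a proposal that omitted its timeline and budget.
submitted = ["Purpose of the evaluation", "Program goals and objectives",
             "Evaluation questions", "Methodology", "Deliverables"]
print(missing_sections(submitted))  # ['timeline', 'budget']
```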
Once the RFP has been completed, it should be distributed to a variety of local organizations. Suitable can-
didates with good credentials in program evaluation can usually be found in social science research organizations, in institutions of higher education, and among independent consultants. Try using your existing network to identify
persons who have conducted program evaluations for other organizations in your community or for other to-
bacco prevention and control programs.
Helpful Hint: Under some circumstances it is not necessary that an RFP or other formal competitive
process be used. Check with your budget or contract office for advice on the appropriate procedure for
your program. If you do not conduct an RFP, the review and selection guidelines outlined here are still
applicable – even for only one application.
Guideline 2: Select an evaluator.
There are three basic steps in selecting the evaluator:
a) reviewing and ranking the applications;
b) interviewing the candidates; and
c) negotiating and writing a contract.
An evaluator is not actually selected until a contract is executed. Steps a and b are used to assess and rank
candidates. During these phases of the selection process, the evaluators are also deciding whether to select
you. If you and the evaluator both agree that a good match is in the offing, then steps a and b become the
background work for step c, developing the contract. Each of these steps is described below.
Reviewing and Ranking the Applicants
Depending on your available resources and the number of available evaluators in your area, you will proba-
bly receive several applications or proposals. I recommend a two-step review process involving a review of the
proposals and then formal interviews with the top candidates. You would then use both the results of the pro-
posal review and of the interviews to hire the right evaluator for your program.
The first step in selecting from among the applicants is to winnow down the proposals to the top three ap-
plicants through a thorough review of the proposals. The following review criteria will allow you to assess the
proposals based on the requirements established in the RFP. A sample evaluation review form is provided at
the end of this guide which includes these seven criteria and six additional criteria for interviewing the appli-
cants.
1. How well does the evaluator understand the focus of the evaluation?
If your intent is to assess how well services were planned and implemented, does the evaluator clearly
state this focus in the proposal? If you want an assessment of the results or impact of the services,
does the evaluator clearly state this focus in the proposal? Does the evaluator propose that the evalua-
tion will be decision-focused − that is, that the evaluation findings will help you make decisions about
your program, including the need for additional information, what activities might be changed, or new
objectives that might be implemented?
Importantly, the applicant's statement about the focus of the evaluation will be an indication of
whether the evaluator understands the distinction between research and evaluation. Research is con-
ducted primarily to expand our knowledge of a topic or program. Evaluation will help you decide what
information to use and how to use it, how you might improve service activities, or how you might
change your program's objectives. It is crucial that the evaluator understand that evaluation must help
you make decisions about how to improve your program.
Program Evaluation or Research?
Evaluation is closely related to, but distinguishable from social research. Evaluation utilizes many of the same
methods used in social research. According to Bill Trochim, evaluation takes place within a political and organ-
izational context and therefore it requires additional skills, including group process skills, management ability,
political savvy, and sensitivity to multiple stakeholders.
A common definition of program evaluation is: the systematic assessment of the worth or merit of
some human service program in order to improve it. (Adapted from Michael Scriven)
In contrast, a common definition of research is: any systematic activity designed to develop or con-
tribute to generalizable knowledge. (Federal Regulations Regarding Human Subjects Protection)
Research emphasizes acquiring and assessing information. The primary purpose of research is to generate new
knowledge. Evaluation emphasizes assessing worth or merit in order to improve. The primary purpose of
evaluation is to determine the value of program strategies and improve the ones that can be improved.
2. How well does the evaluator understand the program goals and objectives?
A good evaluation will be based on the goals and objectives you have established for your program.
Does the evaluator communicate in the proposal a clear understanding of your program's goals and
objectives?
Does the evaluator understand your program's priorities? For example, if you emphasize retailer edu-
cation over school-based instructional programs, the evaluation proposal should note these priorities.
3. How well do the proposed evaluation questions address the program goals and objectives?
Evaluation questions are based on program goals and objectives. The most direct evaluation questions
are simply objectives written as questions. For example, “To what extent have comprehensive tobacco use prevention education programs been implemented in grades 9-12?” (implementation assessment). Or, “How have community leaders' attitudes toward tobacco changed during the past 12 months?” (outcome assessment).
4. How appropriate is the proposed methodology for the evaluation questions?
This can be a difficult issue to assess. Selecting evaluation methodologies can be a lot like selecting
foods; not only are the needs of the people (program) important, but also their individual tastes. How-
ever, the evaluator should clearly explain in the proposal why the methods being proposed are the
most appropriate. In general, you will readily see the logic behind using surveys, records reviews,
pre/post tests, or other approaches for answering questions about your program. To repeat an earlier point, whatever approach is used should yield information that you can use to make information, management, or policy decisions about your program.
If you are not familiar or comfortable with assessing the proposed methodology, then I recommend
that you enlist the services of an experienced evaluator to assist you with the review of proposals. An
evaluator on the review team will serve two primary purposes: 1) assessing the appropriateness of the
methodology; and 2) making recommendations for improving the evaluation methodology, if improve-
ments are needed.
5. How reasonable is the timeline of evaluation activities?
The reasonableness of timelines depends on several factors, including your program budget period,
when you need the evaluation results, and the evaluation methods used. Don't rush the evaluation un-
necessarily, but do allow yourself enough time after the evaluation is completed to use the results for
your program planning.
6. How relevant and useful are the proposed deliverables?
The most common evaluation deliverable is a written report. I recommend that you look for several
types of reports, each of which is used for a different purpose. Obviously, the most important reports
are the final reports which present the evaluation results and the conclusions and recommendations of
the evaluator. To ensure that the evaluation activities are being conducted as planned and within time-
frames and budget, you should get monthly updates (these generally are brief reports). If the method-
ology includes a pretest and posttest, then I recommend that you ask for a report of the pretest results
as well as a report of the analysis of both the pretest and posttest. If the evaluation uses procedures
that build on one another (for example, conducting a focus group and using the results to develop a
survey) then a report on each component will be useful. For evaluations being conducted over a long
time period, a comprehensive interim report might be useful.
Helpful Hint: The larger the scope of the evaluation, or the more complex the methodology, the more
there will be to report; it is commonly easier to digest several shorter and more focused reports than
one large, detailed report.
7. How reasonable is the proposed budget?
Does the candidate believe your evaluation can be conducted for the available funds? Candidates must
indicate that their proposed evaluation approach can be carried out for the funds you indicated would
be available (or less!). You might find that a proposed evaluation plan is excellent, but unfeasible under
your anticipated budget.
Interviewing the Applicants
The second step in selecting from among the applicants is to conduct formal interviews with the top three
candidates. The interviews should be used to clarify any concerns or questions raised during the proposal re-
view and to gauge evaluators' ability to work with your program.
Request that each candidate interviewed provide you with previous evaluation reports they have written
and with references of people or organizations for whom they have previously conducted evaluation projects.
8. How useful are the candidate’s previous evaluation reports?
Look for evaluation reports for which the candidate served as lead author or as project direc-
tor/manager (typically the second author in a report from a research institute). Assess the reports for
clarity, organization, readability, and potential usefulness for decision makers. Pay particular attention
to how well the reports would help a program improve its information collection, activities, or objec-
tives. Candidates providing technical, poorly written, disorganized, difficult-to-understand, or lengthy
evaluation reports will likely compose similar reports for your evaluation.
9. Does the candidate have good references?
References should be contacted to obtain their experiences with and opinions about the candidates.
Some questions you might ask the references include:
a. How well did the evaluation approach used by the evaluator address the needs and desires of your
organization?
b. Was the evaluation conducted in a timely fashion?
c. Was the evaluation conducted within your budget?
d. What was the most useful aspect of the evaluation report? What changes, if any, did you make as
a result of the evaluator's recommendations?
e. Would you hire the evaluator to conduct another evaluation for you?
The actual interview might begin with the evaluators presenting their proposed evaluation plan. Such pres-
entations provide you with a great deal of invaluable information about the candidates. You will learn about the evaluat-
ors' ability to organize information, communicate ideas, and respond to questions. Invite your staff, supervi-
sors, and program partners to the presentation to get their input. Remember: they also will be using the re-
sults.
10. How effectively does the evaluator communicate with you and other stakeholders?
If the discussion becomes very technical and a candidate is unable to present information in an easily
understood manner, it is unlikely that this candidate will meet your needs. A candidate unable to com-
municate effectively at this time will probably not overcome the problem during the evaluation. Effec-
tive communication is a key for success, and the interview gives committee members a good idea of
how effectively a candidate can communicate.
In addition to whatever questions might arise during the presentation, the following questions might also
be useful.
11. What is the candidate's prior evaluation experience?
Experience is an important factor to consider. A candidate probably will not have performed exactly the
same evaluation that you require, but many similarities between previous programs and your own can
be found. A candidate that has performed evaluations of substance abuse prevention programs, for ex-
ample, would have experiences that translate directly to tobacco prevention and control programs. De-
pending on the emphases of your program, evaluation experience with law enforcement programs,
school or community based educational programs, marketing, or community building might be relevant
experience. The candidate's prior experience will be your main opportunity to discover and weigh that
person's strengths and weaknesses.
12. Will the candidate's existing professional commitments interfere with the planned evaluation?
Good program evaluators are usually in high demand. A candidate who is engaged in several projects,
however, may be unable to devote sufficient time to your program evaluation. In any case, you should
expect that the quality of work you receive is comparable to any other project in which an evaluator is
engaged.
Ask the candidate to describe current and expected professional commitments. If the commitments
seem excessive, ask how the candidate plans to conduct your program evaluation along with these
other tasks. If the candidate indicates that other persons will be used to assist with the evaluation,
determine which tasks will be performed by whom. Also determine if these other persons are capable
of performing the tasks assigned to them, including requesting copies of resumes or vitae from the
candidate's associates. Using a team of trained and experienced persons to perform an evaluation is
common, but the team leader (i.e., the candidate) should be involved in all tasks that you believe
require this person's direct involvement.
Finally, it is important to ensure that the evaluator you select does not have a conflict of interest. An
evaluation is truly independent when the program evaluator has no stake in the program’s success or
failure. For example, if the evaluator were a member of the management team, then he or she would
obviously have a clear interest in the program results – both in terms of prestige and continued em-
ployment. Such an interest in the program outcomes can affect the decisions of even the most honor-
able evaluators. A conflict of interest questionnaire is included at the end of this guide that you can ask
all evaluators to complete and include with their proposal.
13. What is your general reaction to the candidate?
During interviews, be alert to the candidate's ability to communicate in a straightforward manner, and
be alert to your own expectations of how effectively you and your colleagues can work with this per-
son. A clash of working styles can certainly be a problem, and the chemistry between a candidate and
your program's stakeholders should not be ignored.
Following the interview process, reviewers should individually rate the candidates on all of the issues previ-
ously described. Candidates might be rated on a five-point scale, ranging from "Definitely hire as our evaluator"
to "Definitely do not hire as our evaluator." "No opinion" should be the midpoint. (See Attachment 1 for a sam-
ple form for rating candidates.) Reviewers' scores can be combined in a number of ways, from simply adding up
the total points for each criterion, to averaging the points for each criterion, to asking the reviewers to discuss
their individual ratings and then reach consensus on how a candidate should be rated on each criterion. After
combining reviewers' individual scores, the candidates should be ranked so that the project can be offered to
the candidate most acceptable to the reviewers.
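As a purely illustrative sketch of the scoring arithmetic described above (the guide prescribes no particular tool), the following shows one way reviewers' ratings on the thirteen criteria could be totaled, averaged so that each reviewer carries equal weight, and used to rank candidates; the candidate names, reviewer names, and scores are hypothetical.

```python
# Minimal sketch of combining reviewers' ratings (scale: 2 to -2 per criterion).
# Candidate names, reviewer names, and scores are hypothetical illustrations.

from statistics import mean

# Each reviewer rates every candidate on the 13 criteria described above.
ratings = {
    "Candidate A": {"Reviewer 1": [2, 1, 2, 1, 0, 2, 1, 1, 2, 2, 1, 0, 2],
                    "Reviewer 2": [1, 1, 2, 0, 1, 1, 1, 2, 2, 1, 1, 1, 1]},
    "Candidate B": {"Reviewer 1": [0, 1, -1, 0, 1, 0, -1, 0, 1, 0, 0, -1, 0],
                    "Reviewer 2": [1, 0, 0, -1, 0, 1, 0, 1, 0, 0, 1, 0, 0]},
}

def candidate_score(by_reviewer):
    """Average each reviewer's total so every reviewer carries equal weight."""
    return mean(sum(scores) for scores in by_reviewer.values())

# Rank candidates from highest to lowest combined score.
ranked = sorted(ratings, key=lambda name: candidate_score(ratings[name]), reverse=True)

for name in ranked:
    print(f"{name}: combined score {candidate_score(ratings[name]):.1f}")
```

Whatever combination method the committee chooses, the point is simply to turn individual ratings into a single ranking that can be discussed and, if needed, reconciled by consensus.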
Negotiating and Writing a Contract
The desired relationship between the evaluation committee and the external evaluator is one of partnership
and should be reflected as such in the contract. The contract should state, in a single paragraph if possible, the
evaluator's general responsibilities. If you have the evaluator produce a work plan, it can be incorporated into
your agreement. The contract should list the contract deliverables and provide a timetable for each one. Con-
sider including language describing how changes in the scope of work or work plan will be handled. Many
evaluation contracts also specify who owns the data gathered during the evaluation, as well as who has the
right to publish the results of the evaluation study. Finally, indicate how the evaluator will bill for services ren-
dered and include a schedule of payments. It is common to withhold between twenty and thirty percent of the
evaluator's fee until all deliverables have been submitted and reviewed to ensure that the deliverables respond
to the contracted scope of work.
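For illustration only, and assuming a 25 percent holdback within the twenty-to-thirty-percent range mentioned above, a payment schedule might be laid out as follows; the contract amount and milestone names are hypothetical.

```python
# Hypothetical payment schedule with a portion of the fee withheld
# until all deliverables are submitted and reviewed.

total_fee = 50_000.00          # hypothetical contract amount
holdback_rate = 0.25           # within the 20-30% range noted above

holdback = round(total_fee * holdback_rate, 2)
progress_pool = total_fee - holdback

# Hypothetical milestones paid from the non-withheld portion of the fee.
milestones = ["Evaluation plan approved", "Data collection complete", "Draft final report"]
per_milestone = round(progress_pool / len(milestones), 2)

for name in milestones:
    print(f"{name}: ${per_milestone:,.2f}")
print(f"Released after all deliverables are accepted: ${holdback:,.2f}")
```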
The contract should also detail the program's responsibilities. These responsibilities might include providing
the external evaluator with timely and appropriate guidance and reviewing and approving evaluation instru-
ments and documents in a timely and constructive manner. Your responsibilities might also include providing
the evaluator with appropriate program records and other information, and assisting the evaluator in
solving problems that arise during the evaluation.
Guideline 3: Work closely with the evaluator.
Successful evaluations are the result of good partnerships. Your involvement − and that of other
program stakeholders − is essential to the success of the evaluation. Your involvement in the evaluation proc-
ess should not be limited to periodic meetings with the evaluator. The greater your participation in all of the
various aspects of the evaluation, from planning to final report, the more complete the evaluation will be, and
the greater your understanding of the evaluation results and recommendations will be.
You should continually monitor the evaluator and the evaluation. Review all major work elements, including
sampling plans, instrumentation, data collection plans, etc. You and your staff and partners can be of consider-
able service to the evaluator by facilitating involvement with host sites, service recipients, and others from
whom the evaluator wishes to collect information.
Solicit feedback from your evaluator as the study progresses. Informal insights gained during the study pe-
riod can be valuable. Finally, manage the process actively to make sure you get what you need. A committee
that is intermittently involved in the evaluation process may discover that the study has gone in an inappropri-
ate direction after considerable time, money, and effort have been expended.
As your evaluation nears completion, you and the evaluator should agree on a format for the evaluation re-
port. The evaluation report should address evaluation questions directly and briefly and should be understand-
able to the target audience. Any report, of course, must provide useful and direct guidance for program deci-
sion makers. You and the evaluator should also agree at this time on the evaluator's role in the release of the
evaluation's results. You might request that the evaluator be available to meet with decision makers, conduct
interviews with news media, and make public presentations of the results.
Be involved in the development of the final report. This is not to say that you decide what the evaluator re-
ports; the point of an external evaluation is to get an independent assessment of your program. However, your
knowledge of the target audience will be valuable in framing the report, and your knowledge of policy makers'
future plans for the program will help set the context for any recommendations made by the evaluators. You
should carefully review a draft of the final report and provide detailed, written feedback to the evaluator to use
in developing the final product.
A cautionary note: If the changes suggested to the evaluator are significant and would have the effect of
changing the findings, recommendations, or overall focus of the report, they should be discussed at a meeting
with the evaluator. If the evaluator does not believe that the suggested changes are consistent with the data,
the evaluator has the right (and an ethical obligation) to be disassociated from the report. In such an instance,
the evaluator may make the requested changes and assign authorship of the report to the program, or may
hand over all work to the program to complete the report. Such situations should be avoided, however, as they
usually place the integrity and public acceptance of the report in jeopardy.
Guideline 4: Use the evaluation results.
Once your study is complete, its value will be very limited unless the results are used to improve your pro-
gram. Use the insights gained to review your program design and modify, or replicate, program elements as
appropriate. Use your evaluator as a resource in this activity. Your evaluator's assistance in interpreting the
results for their program management implications can be very valuable. Address the most compelling findings
first. Don't get overwhelmed trying to do everything at once. Make note of findings that suggest further areas
of investigation. Future evaluations may shed light on unclear data.
Use your study not only to improve operations, but to market your program. Potential funders and service
partners will be more willing to become involved with your program if you can show them evidence of effec-
tiveness and a willingness to make improvements based on evaluation data.
Report the results of your evaluation not only to your staff and other program stakeholders, but also to fi-
nancial contributors, key community leaders, and organizational partners. Contact your local newspaper about
the possibility of doing a feature article on your program. Have program participants − especially youth mem-
bers − develop exhibitions including the evaluation results for use in community presentations and other activi-
ties. Present your results at professional conferences and other "issue networks" (groups with a common inter-
est).
Conclusion
A program evaluation that is properly designed to provide interested individuals with information they will actually use
is likely to be successful. However, while utilization of the results of the evaluation is clearly the
primary goal of an evaluation, important benefits may also be gained from the evaluation process itself. The
process of undergoing an evaluation can, for example, build shared meaning and understanding, support and
enhance the program (by building evaluation-based data collection and analysis into the program design), and
support human and organizational development by training staff in new skills. A well-conceived evaluation will
provide benefits no matter the results.
A well-conceived evaluation results from a good working partnership between program stakeholders and
evaluators. An effective functional partnership is founded on agreement on the objectives of the evaluation, an
understanding of the responsibilities and authority of each partner, and mutual respect for the contributions
that each partner provides to the evaluation. If the steps identified here are followed, the evaluation process
should be enjoyable and productive for both the program and the external evaluator.
Evaluator Rating Guide

Candidate Name: _______________________________________

Rate each item on a five-point scale from 2 to -2, where 2 reflects the positive anchor, 0 means "No Opinion," and -2 reflects the negative anchor.

1. How well does the evaluator understand the focus of the evaluation?
   2 = Clearly understands the focus, including the distinction between evaluation and research
   -2 = Has no understanding of the evaluation focus or the distinction between evaluation and research

2. How well does the evaluator understand the program goals and objectives?
   2 = Has an excellent understanding of program goals and objectives
   -2 = Has no understanding of program goals and objectives

3. How well do the proposed evaluation questions address the program goals and objectives?
   2 = Evaluation questions fully address program goals and objectives
   -2 = Evaluation questions do not address program goals and objectives

4. How appropriate is the proposed methodology for the evaluation questions?
   2 = Appropriate: can clearly see that the results obtained will be useful
   -2 = Not appropriate: cannot see how the results obtained might help the program

5. How reasonable is the timeline of evaluation activities?
   2 = The evaluation can be conducted well within the proposed time frame
   -2 = The evaluation cannot be completed within the proposed time frame

6. How relevant and useful are the proposed deliverables?
   2 = Very useful for program improvement
   -2 = Not useful for program improvement

7. How reasonable is the proposed budget?
   2 = The evaluation is very likely to be completed within the available resources
   -2 = The evaluation can't be conducted with the available resources

8. How useful are the candidate's previous evaluation reports?
   2 = Previous reports were understood and used
   -2 = Previous reports were not understood or used

9. Does the candidate have good references?
   2 = Received excellent references
   -2 = Received poor references

10. How effectively does the evaluator communicate with you and other stakeholders?
   2 = Communicates effectively with most stakeholders
   -2 = Communicates effectively with few stakeholders

11. What is the candidate's prior evaluation experience?
   2 = Has extensive experience evaluating similar programs
   -2 = Has limited evaluation experience

12. Will the candidate's existing professional commitments interfere with the planned evaluation?
   2 = Commitments will not interfere with the evaluation
   -2 = Commitments will interfere with the evaluation

13. What is your general reaction to the candidate?
   2 = Definitely hire as our evaluator
   -2 = Definitely do not hire as our evaluator
Evaluator Conflict of Interest Questionnaire
Name of Program:_______________________________________
Answer Yes or No to each question.

1. Do you, your immediate family, or business partners have financial or other interests in the program to be evaluated?
2. Have you been employed by the program within the last 24 months?
3. Do you plan to obtain a financial interest, e.g., stock, in the program?
4. Do you plan to seek or accept future employment with the program?
5. Are there any other conditions that may cause a conflict of interest?

If you answered Yes to any of the above questions, please provide a written explanation of your answer.

I declare that all of the above questions are answered truthfully and to the best of my knowledge.

Signature: _______________________________   Date: _______________