This introduction discusses the challenges and benefits of measurement and evaluation for training programs. While evaluation provides useful accountability data, it also requires time and resources. The main challenges are: too many complex evaluation theories; a lack of understanding of evaluation; searching for unnecessary statistical precision; treating evaluation as a post-program activity rather than planning for it early; failure to see the long-term payoff; lack of support from stakeholders; and failure to provide the data senior managers require, such as impact on business units and return on investment. Overcoming these challenges can help training functions embrace evaluation and use it both to improve programs and to demonstrate their value.
Advancing the Methods of Evaluation of Quality and Safety Practice and Educa... - Daniel McLinden
Improving healthcare in an organization requires individuals with the capability to design, test and implement improved processes in an organization with the capacity to support the scale and spread of improvement. If improvement capability is not widespread in the workforce, then an intervention is needed to create the capability. In response to this challenge, Cincinnati Children’s designed and implemented a comprehensive Improvement Science curriculum to build capability. The program has achieved measurable improvements in both process and outcome measures of patient care and business processes. Incorporating unique design principles, this intervention served as a catalyst for quality transformation.
In this workshop we will share our perspective and provide examples with data that illustrate:
• Building support and buy-in through the design of participant selection.
• Creating an intervention to build capability that includes training but involves more than training.
• A comprehensive model based on competencies.
• Expanding the four-level Kirkpatrick evaluation model with additional levels that encompass economic impact and network impact.
• Using self-assessment to evaluate learning outcomes.
Evaluating the quality of quality improvement training in healthcare - Daniel McLinden
Quality Improvement (QI) in healthcare is an increasingly important approach to improving health outcomes, improving system performance and improving safety for patients. Effectively implementing QI methods requires knowledge of methods for the design and execution of QI projects. Given that this capability is not yet widespread in healthcare, training programs have emerged to develop these skills in the healthcare workforce. In spite of the growth of training programs, limited evidence exists about the merit and worth of these programs. We report here on a multi-year, multi-method evaluation of a QI training program at a large Midwestern academic medical center. Our methodology will demonstrate an approach to organizing a large-scale training evaluation. Our results will provide the best available evidence for features of the intervention, outcomes and the contextual features that enhance or limit efficacy.
Forecasting - Estimating the future value of training investments: Creating ... - Daniel McLinden
Implementing a program and then waiting a year to find out if that program has made a difference may not be a practical response to scrutiny from the executive suite. A better way to respond is to include forecasting when planning investments intended to develop human capital. Forecasting impact and ROI illustrates what resources, processes, and supports are necessary to achieve impact. This session will show you how to adeptly take evaluation methods and move these earlier in the program’s life-cycle. We will demonstrate how to apply ROI evaluation methods to forecasting and complement these methods with additional methods. In particular, we will cover the use of logic modeling to map the causal chain of events from investment to impact and ROI. A case example will illustrate each of the steps of assessing the economics; namely, defining current and anticipated costs, estimating future benefits, calculating the future Return on Investment (ROI), and conducting a sensitivity analysis.
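To make those steps concrete, here is a minimal Python sketch of the arithmetic: a point ROI forecast plus a simple one-way sensitivity analysis. All figures are hypothetical placeholders, not data from the session.

def forecast_roi(costs: float, benefits: float) -> float:
    # ROI (%) = net benefits divided by costs, times 100.
    return (benefits - costs) / costs * 100

# Hypothetical program: current and anticipated costs vs. estimated future benefits.
costs = 120_000.0     # design, delivery, participant time, evaluation
benefits = 300_000.0  # estimated monetary value of improved performance

print(f"Point forecast: {forecast_roi(costs, benefits):.0f}% ROI")

# Sensitivity analysis: vary the benefit estimate +/-40% to see how fragile
# the forecast is before any money is committed.
for factor in (0.6, 0.8, 1.0, 1.2, 1.4):
    print(f"benefits x {factor:.1f} -> ROI {forecast_roi(costs, benefits * factor):6.1f}%")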
Concept Maps As Network Data: Applying Social Network Analysis to a Network ... - Daniel McLinden
Concept Mapping is a method that creates a visual representation that illustrates the thoughts, ideas, or planned actions that arise from a group of stakeholders on a particular issue. Social Network Analysis is a method that likewise creates a visual representation of data; a network map typically represents people and the connections, or lack thereof, between these people. While the goals of these two methods differ, the underlying data structures are similar; a network of relationships between data elements. Social network analysis is explored here as a supplement to concept mapping. A secondary analysis of a concept map was conducted using social network analysis. The methods and the implications for supplementing the analysis of concept maps and debriefing results with stakeholders are discussed.
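The shared data structure is easy to see in code. Below is a minimal, hypothetical sketch (invented sort piles, not the study's data) that treats concept-mapping sort results as network ties and applies standard social network measures with networkx:

import itertools
import networkx as nx

# Each hypothetical participant sorts statement IDs into piles.
sorts = [
    [{1, 2, 3}, {4, 5}],
    [{1, 2}, {3, 4, 5}],
    [{1, 3}, {2, 4, 5}],
]

# Statements sorted together become linked nodes; the number of sorters
# who grouped a pair serves as tie strength.
G = nx.Graph()
for piles in sorts:
    for pile in piles:
        for a, b in itertools.combinations(sorted(pile), 2):
            weight = G.get_edge_data(a, b, {"weight": 0})["weight"] + 1
            G.add_edge(a, b, weight=weight)

print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))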
Exploring the Economics of Quality Improvement Education in Healthcare: An A... - Daniel McLinden
What are the economics associated with a program intended to influence large-scale organizational change in a healthcare setting? This work reports on the exploration of the economic linkages among the resources used and the benefits achieved from a training intervention. The training program is intended to develop quality improvement capability among training participants in a medical center. This economic evaluation involves the application of utility analysis to value the costs of the program and to estimate the benefit as the value of a trained individual. Utility analysis was further enhanced by integrating the analysis within a dynamic systems model. This extension provided a more precise understanding of the economics over time as training participants flow through a training intervention and then back into the workplace. Finally, we explore the potential to quantify the linkage between interventions with learners and the impact of large-scale change as a means for considering the value of the intervention.
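For readers unfamiliar with utility analysis, one common form is the Brogden-Cronbach-Gleser model. The Python sketch below shows that arithmetic with hypothetical placeholder inputs; it illustrates the general technique, not the valuation used in this study.

def training_utility(n, years, effect_size, sd_y, cost_per_person):
    # Delta-U = N * T * d * SDy - N * C: dollar value of training one cohort.
    return n * years * effect_size * sd_y - n * cost_per_person

value = training_utility(
    n=50,                  # participants trained in one cohort
    years=2.0,             # how long the training effect persists
    effect_size=0.4,       # standardized gain in job performance (d)
    sd_y=30_000.0,         # dollar value of one SD of job performance
    cost_per_person=4_000.0,
)
print(f"Estimated utility of one cohort: ${value:,.0f}")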
Evaluating opportunities to optimize learning and economic impact: Applyin... - Daniel McLinden
Systems modeling provides the evaluator with a method to empirically explore the potential impact of an intervention and, in particular, the economics of the features and benefits of that intervention. Assumptions that support the intervention can be tested, and the varying perspectives of multiple stakeholders can be systematically evaluated, both to quantify the effects of their unique perspectives and to promote consensus building and dialogue among stakeholders that is grounded in empirical evidence. This session will review work done in a medical center to model the economics of required training programs for new employees. The specific aim was to identify the optimum timing for compliance-based training, using economic considerations and this model as the basis for beginning discussions of policy changes within the organization. Additionally, modeling the tradeoffs between optimizing economic variables and optimizing non-economic variables such as learning will be discussed.
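A stylized model shows the kind of trade-off such an analysis weighs. In the hypothetical Python sketch below, delaying compliance training increases risk exposure, while training delivered in the overload of the first days tends to need re-teaching; the optimum is wherever total cost bottoms out. Every parameter is an invented placeholder, not the medical center's data.

import math

RISK_COST_PER_UNTRAINED_DAY = 25.0  # expected daily cost of noncompliance exposure
RETRAIN_COST = 2_000.0              # cost of re-teaching content delivered too early
FORGETTING_SCALE = 30.0             # how quickly the "too early" penalty decays (days)

def total_cost(training_day: int) -> float:
    exposure = RISK_COST_PER_UNTRAINED_DAY * training_day
    retraining = RETRAIN_COST * math.exp(-training_day / FORGETTING_SCALE)
    return exposure + retraining

best = min(range(91), key=total_cost)
print(f"Lowest-cost training day: {best} (total ${total_cost(best):,.0f})")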
This document provides an overview of using analytics to improve student success. It discusses five key components of an analytics model: gather, predict, act, monitor, and refine. Under gather, it explains the importance of collecting various data from multiple sources. Predict involves building models to forecast outcomes based on the gathered data. Act refers to developing data-driven responses and interventions. Monitor is about continuously tracking results in a formative and summative manner. Refine cycles back to improving the other components as needed. The document aims to help conceptualize an analytics approach and strategically apply data to enhance student achievement.
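As a concrete, hypothetical illustration of the gather-predict-act steps, the Python sketch below fits a simple model on gathered student data and flags at-risk students for intervention. The features, data, and threshold are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Gather: one row per past student (logins/week, assignments submitted, midterm %).
X = [[1, 2, 55], [9, 8, 82], [3, 4, 60], [10, 9, 90], [2, 1, 48], [7, 7, 75]]
y = [0, 1, 0, 1, 0, 1]  # 1 = completed the course successfully

# Predict: fit on historical outcomes, then score current students.
model = LogisticRegression().fit(X, y)
current = {"student A": [2, 3, 52], "student B": [8, 9, 88]}
probs = model.predict_proba(list(current.values()))[:, 1]

# Act: route low-probability students to an intervention; monitor and refine follow.
for name, p in zip(current, probs):
    print(f"{name}: P(success)={p:.2f} ->", "intervene" if p < 0.5 else "on track")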
This document discusses measuring the outcomes and impacts of extension programs. It begins by outlining the objectives of measuring outcomes and impacts, and discussing how to time evaluations using a logic model. It then describes 12 major evaluation techniques including surveys, case studies, interviews and observations. The document emphasizes the importance of demonstrating measurable results to win public support. It provides guidance on determining indicators and applying four types of evaluation: needs assessment, process evaluation, outcome evaluation, and impact evaluation. Throughout, it stresses applying a logic model approach and considering utility, feasibility, appropriateness and accuracy when designing program evaluations.
Assessing the impact of a global health MOOC/OER - Sally Parsley
Presentation to OER Global 18 (Delft, 24 April 2018) giving an update on work so far and plans for assessing the impact of a global health MOOC/OER.
Digital Scholar Webinar: Getting to Outcomes: Supporting Implementation of Ev... - SC CTSI at USC and CHLA
This document summarizes a lecture on the Getting To Outcomes (GTO) approach to supporting implementation of evidence-based programs. GTO is a 10-step model that guides practitioners through planning, implementation, and evaluation of programs. Early studies found GTO improved program performance and capacity but not always outcomes. Later randomized trials linked GTO support to better fidelity and some proximal youth outcomes. Overall, GTO shows that implementation support can improve how communities deliver evidence-based programs, though longer-term or more intensive support may be needed to reliably change outcomes.
Factors Effecting Motivation and Influencing Job Satisfaction in the UAE - The HR Observer
Research findings have shown that job satisfaction of local employees in the UAE can be increased through a good compensation system: valid pay, recognition, promotional opportunities, meaningful work, and a good working environment. These factors have a positive relationship with job satisfaction in the UAE. In this session, Patricia will walk you through the three most important motivational items influencing job satisfaction in your company, based on research conducted with public and private companies in the UAE.
Dr. Patricia Prihodova, Assistant Professor, Canadian University of Dubai
Using empowerment evaluation to strengthen Talent Search programming, March 2011 - CHEARS
1. The presentation provided an overview of empowerment evaluation concepts and discussed how it can be used to strengthen Talent Search programs.
2. Empowerment evaluation aims to increase program success by providing stakeholders with tools for self-evaluation and mainstreaming evaluation into program planning and management.
3. Examples of how empowerment evaluation has been used included a new COE project testing models for Talent Search programs and a Next Generation GEAR UP evaluation approach.
Assessment Analytics - EUNIS 2015 E-Learning Task Force Workshop - LACE Project
This presentation introduces a discussion session at the E-Learning Task Force workshop at the 2015 EUNIS Congress. The LACE Project is very briefly introduced, followed by an explanation of the presenter's view of learning analytics and a critique of some common themes. Assessment Analytics is presented as an antithesis to these themes, and an assessment lifecycle model (used in the Jisc Electronic Management of Assessment Programme) is used to outline some ways in which assessment analytics can be realised, as stimulus for discussion.
This document introduces logic models and explains their purpose and usefulness. It defines the core components of a logic model, including resources/inputs, activities, outputs, outcomes, and impact. A logic model is a visual representation that shows the relationship between the resources invested in a program and the results or changes expected to occur. It provides a roadmap to help stakeholders understand how a program is intended to work. The document uses a hypothetical example of planning a family trip to demonstrate how to develop and read a basic logic model. Logic models are useful tools for program planning, implementation, evaluation, and improvement.
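As a quick illustration of how the components chain together, here is a hypothetical Python sketch that echoes the document's family-trip example; the entries are invented, and a real logic model would be built with stakeholders.

logic_model = {
    "resources/inputs": ["budget", "vehicle", "map", "family time"],
    "activities": ["plan the route", "pack", "drive"],
    "outputs": ["miles covered", "sites visited"],
    "outcomes": ["family rests and reconnects"],
    "impact": ["stronger long-term family relationships"],
}
# Read top to bottom: what we invest -> what we do -> what we produce
# -> what changes -> the lasting difference.
for stage, items in logic_model.items():
    print(f"{stage:>17}: {', '.join(items)}")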
Using a standards alignment model as a framework for doctoral candidate asses... - CPEDInitiative
This document outlines the process an institution took to redesign its doctoral program in alignment with CPED principles. It began with conducting a needs assessment and developing a theory of action linking program components to intended outcomes. Key aspects of the redesign included establishing program standards, designing authentic assessments like a scholarly practitioner portfolio and dissertation in practice, and using these assessments for continuous program improvement. The goal was to create a coherent program design that prepared students as scholarly practitioners who could apply research to solve problems of practice.
This document presents eight case studies of evaluations that were found to be highly cost-effective and useful to intended users. Each case study examines evaluations of development projects, programs, or policies from different regions and sectors. The case studies aim to address what impacts the evaluations contributed to, how findings and recommendations were used, how the evaluations' benefits can be attributed, how cost-effectiveness was measured, and lessons for future evaluation design. The document's goal is to provide examples of useful evaluations and lessons on increasing the likelihood that evaluations will be influential. A central message is that well-designed evaluations conducted in close consultation with users and at the right time can cost-effectively improve development interventions.
This presentation provides an overview of the Systematic Inquiry Cycle and Logic Modeling as tools for designing and developing a research study or project/program initiative.
1. The document describes a survey of anaesthetic trainees in Merseyside that found they collectively spent over 1000 hours on audits but only 16% resulted in recognizable practice changes.
2. It then outlines the formation of MAGIQ (Mersey Anaesthetic Group for Improving Quality) to help trainees collaborate on quality improvement projects and overcome barriers like lack of time, resources and support.
3. One such project was a Mersey-wide initiative to increase the use of pre-intubation checklists, which through rapid audit and feedback across 11 hospitals was able to increase checklist use from 51% to 87% over 8 weeks.
This document discusses using analytics to improve student success and outcomes. It provides an overview of learning analytics and predictive modeling concepts. Several components of an analytic model are described, including gathering data, predicting outcomes, taking action, monitoring results, and refining processes. Case studies of other institutions that have implemented analytic systems are presented. Managing expectations for analytic projects is also addressed, as results may not be immediate and adoption can be challenging. The goal is to use data-driven insights to help target support and resources to enhance student performance.
This document provides an overview of a webinar on gaining buy-in for quality improvement projects. It introduces the faculty members and discusses the community health center's founding and profile. The webinar covers strategies for engaging leadership, stakeholders, and team members in a project. It emphasizes understanding perspectives, conducting a stakeholder analysis to prioritize groups, and evaluating data to communicate the need for change. Tools discussed include developing communication plans and using a GRPI framework to assess project setup. The webinar aims to help participants effectively gain support and manage resistance when introducing changes.
Team Health Right Start presentation 27 February 2012 - byersd
The document outlines plans for the Team Health 'Right Start' initiative which aims to improve teamwork, communication and collaboration in healthcare. It discusses:
- Past projects that provided training to final year students and new graduates to build core skills. Evaluation found these improved work self-efficacy and attitudes towards collaboration.
- Future plans to develop training modules for new graduates over 2 years and roll out training to existing clinical teams, with a focus on high-risk clinical/communication issues.
- The training will be evaluated and refined as it is implemented more broadly across healthcare networks and facilities.
The document discusses a forum aimed at showcasing educational resources and facilitating networking from Team Health's 'Right Start' initiative to improve collaboration. It provides an overview of the 'Right Start' program which includes transition support for final year students, training for new graduates, and building high-performing clinical teams. Initial evaluations found the program improved participants' work self-efficacy and comfort with interprofessional collaboration. The organizers outlined next steps such as developing online modules and mapping training to national standards to expand the program.
This annotated compendium of evaluation planning guides can help you understand the basics of conducting an evaluation; learn how to create a logic model and indicators; understand evaluation terminology; develop performance management metrics; and evaluate your research, knowledge translation and commercialization activities, outputs and outcomes.
Issue 2: Effectiveness of Mentoring Program Practices.
This series was developed by MENTOR and translates the latest mentoring research into tangible strategies for mentoring practitioners. Research In Action (RIA) makes the best available research accessible and relevant to the mentoring field.
Faculty members need to know how to plan participative instruction in engineering courses. This PPT provides a set of guidelines for planning and delivering effective instruction.
What are we learning from learning analytics: Rhetoric to reality, escalate 2014 - Shane Dawson
This document summarizes a talk about what we are learning from implementing learning analytics (LA) in higher education. It discusses the drivers for interest in LA, perspectives from industry and research, benchmarks of current LA adoption, and emerging models. While industry rhetoric portrays LA as providing easy answers, the reality is more complex. Most universities are still in early stages of basic reporting rather than advanced applications. For LA to meet its potential and have long term impact, a process-focused model is needed that builds organizational capacity, is adaptive, and takes a broad view of LA beyond just retention.
This document provides an overview and guidelines for conducting Organization Assessments (OAs) to inform the Canadian International Development Agency's (CIDA) decision making. OAs are intended to assess partner organizations' performance, capacity, operating environment, and motivation to improve performance. The guide outlines a common framework for planning, implementing, analyzing and reporting on OAs to systematically evaluate organizations' strengths and weaknesses and suitability for funding or partnership. It aims to help CIDA demonstrate accountability, responsible spending, and achievement of results when determining how to invest in development partnerships.
This document provides guidance on developing an innovation operating model. It discusses monitoring innovation project data and performance, consolidating innovation performance reports, and defining responsible roles. It also covers developing an innovation strategy, leadership principles, and objectives. Additional sections provide templates for presenting the minimum viable product of an innovation operating model and for pitching the model to obtain feedback and buy-in.
The document discusses plans to establish a Certified Information Quality Professional (CIQP) certification. It will be developed through a partnership between IAIDQ and CASTLE Worldwide. The certification aims to advance the information quality profession by establishing standards and validating practitioners' skills and knowledge. It will benefit both individuals and organizations by helping ensure practitioners can deliver quality improvements. The certification process will involve job analysis, validation surveys, and development of a rigorous exam based on industry standards.
The document discusses strategies for developing a successful career in auditing. It recommends taking a long-term view of career development through rotational programs, mentoring, and continuing education. It also emphasizes the importance of soft skills like collaboration and networking, and suggests attending professional development events to expand one's skills and network. The talent shortage in auditing means companies must focus on retention through career growth opportunities and an engaging corporate culture.
http://www.qualtrics.com/employee-engagement/
Bryce Winkelman talks about employee engagement at the Talent Summit (Dublin Mansion House, March 4th).
Qualtrics 360: Sophisticated employee development made simple
This document provides an overview and copyright information for the book "Best Practices in Talent Management: How the World's Leading Corporations Manage, Develop, and Retain Top Talent". The book contains 14 case studies of large companies and their talent management programs and initiatives. It discusses topics such as leadership development, high-potential programs, succession planning, and retaining top employees. The case studies provide details on the design, implementation, and evaluation of the various companies' talent management strategies.
This document provides information about assessment and recruitment solutions from two companies: Quo Vadis Consultants and Antal International Network. It outlines their services such as behavioral assessments, recruitment, and testing to help clients identify talent, remove costs, and enhance their business. Contact information is provided for the managing director and both companies' websites and addresses are listed.
This document is Phillip Fuller's portfolio demonstrating achievement of institutional outcomes through successful completion of a Business Quality Management degree from Southwestern College. It outlines artifacts and reflections for each institutional outcome area, including critical thinking, ethical reasoning, leadership, communication, and career preparation. For critical thinking, Fuller reflects on how courses helped him look at situations from different perspectives and consider causes and effects before making decisions. He discusses using tools like flow charts to support critical thinking.
iEDGE is now the exclusive reseller and distributor of assessments from Assessments 24x7 in Pakistan. Assessments 24x7 is a global leader in assessment technology that has conducted over 7 million assessments. They offer a wide range of assessment types to evaluate things like personality, leadership skills, sales aptitude, and more. iEDGE is authorized to market, distribute, and provide support for all of Assessments 24x7's products and platforms to clients in Pakistan.
Unleashing the Power of Analytics: Driving Performance at the Intersection of... - Human Capital Media
Performance analytics is a data-driven approach used to optimize how organizations manage and deploy learning and development that supports execution of business strategy. How are learning organizations today using data? How do they define and determine relevant analytics? Is there a critical shortage of analytics skills in the learning function? What outcomes are organizations realizing from their efforts when deploying performance analytics to drive learning and development (L&D)? Finally, how do we link learning analytics and business analytics to produce performance data that really deliver results?
In this session, we will share the results of proprietary research conducted with hundreds of CLOs, in partnership with Human Capital Media, that delved into these questions… and more. We will shed light on best practices being employed and the challenges for learning leaders.
Learning Objectives:
Identify best practices and gaps in the adoption of analytics by learning organizations
Understand the difference and impact of learning analytics vs. performance analytics
Examine how analytics can transform strategy and outcomes for the learning function
The document outlines the development of human resource management standards in South Africa by the South African Board for People Practices (SABPP). It provides examples of draft standards for strategic HR management, talent management, HR risk management, and HR measurement. For each standard, it defines the topic, lists objectives, and provides implementation guidelines. The overall document discusses the importance of establishing national HR standards to improve HR practices and professionalize the field in South Africa. It notes the wide support received from across South Africa and other countries for the HR standards project.
Insights Success is the Best Business Magazine in the world for enterprises; as a platform it focuses distinctively on emerging as well as leading, fastest-growing companies, their confrontational style of doing business and their way of delivering effective and collaborative solutions to strengthen market share. Here, we talk about leaders' viewpoints and ideas, the latest products/services, etc. Insights Success magazine reaches out to C-level professionals, VPs, consultants, VCs, managers, and HRs of various industries.
This document provides an overview of the South African Board for People Practices (SABPP) and their work developing national HR standards for South Africa. It begins with welcoming remarks and introducing SABPP's professional values. It then outlines SABPP's value proposition through providing professional recognition, resources, and research to advance the HR profession. The document discusses various HR standards developed by SABPP, including standards for strategic HR management, talent management, HR risk management, and HR measurement. It provides definitions, objectives and implementation guidelines for each standard. The document emphasizes the importance of standards for consistency, continuous improvement, and managing risk in HR practices.
The document summarizes best practices for building effective eLearning programs for nurse educators presented by Richard Close and Kenneth Dion. Some of the best practices discussed include providing the right information to nurses at the right time, using open licensing to allow flexible content development, tracking job competencies in a learning management system, and ensuring projects are planned and tested effectively. The presentation also covered the importance of reporting and integrating eLearning with other systems like human resources and finance.
Similar to ASTD Handbook for Measuring and Evaluating Training
Social Network Analysis as a tool to evaluate quality improvement in healthcare - Daniel McLinden
This evaluation examines the relationships among members of a Quality Improvement (QI) initiative intended to improve the care and management of children with asthma in the local community. QI methods typically involve teams of people working across multiple professional and organizational boundaries to enact change in a complex system of care. While the overall success of the initiative will be evaluated based on the improvement in health outcomes, a more immediate evaluation need was to assess a pathway to future success, namely, the connections between individuals involved in the initiative. Connections were important because the standard work responsibilities of individuals tend to occur within existing work relationships, while the QI work requires that individuals form new working relationships. Social network analysis is a methodology that visualizes and quantifies the relationships in networks of people and was used here to evaluate the connectedness between people. Individuals involved in the initiative indicated the extent to which they interacted with every other member of the initiative prior to beginning work on the initiative and presently. The connections between people were evaluated using a density measure and were found to have significantly increased for several QI teams. The role of persons in the network was evaluated using several centrality measures and was found to remain essentially stable. Implications for evaluation and the improvement of improvement initiatives are discussed.
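The two measures named above are simple to compute. This hypothetical networkx sketch (invented ties, not the initiative's data) contrasts density before and during an initiative and reports degree centrality:

import networkx as nx

people = ["a", "b", "c", "d", "e"]
before = nx.Graph([("a", "b"), ("c", "d")])
during = nx.Graph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")])
for g in (before, during):
    g.add_nodes_from(people)  # include isolates so densities are comparable

print("density before:", nx.density(before))  # 2 of 10 possible ties = 0.2
print("density during:", nx.density(during))  # 5 of 10 possible ties = 0.5
print("degree centrality during:", nx.degree_centrality(during))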
And Then the Internet Happened: Prospective Thoughts about Concept Mapping in ... - Daniel McLinden
In this millennium the World Wide Web has enabled new models of collaboration and the power of networks to emerge. In the second decade of the new millennium these ideas continue to spread. Cross-disciplinary teams, open innovation and social networks represent radically different approaches to working in systems to create knowledge, share information and develop interventions. Think Wikipedia. Methods for program planning and evaluation need to keep pace with these changes, and concept mapping methodology may have been ahead of its time as a method that resonates with 21st-century complexity. To think prospectively, this session will reframe concept mapping as a method that employs open innovation and networks to create meaning about complex phenomena. On this basis, the session will explore through presentation and discussion the future possibilities for the types of problems that can be addressed and ways to co-create meaning with diverse stakeholders.
Making meaning in collaboration with communities: Co-creation through concept... - Daniel McLinden
Community-Based Participatory Research is a collaborative approach to research that engages community members alongside researchers in all stages of the research process, including shared decision making on community-prioritized research initiatives. The inclusion of the community ensures that research and interventions are relevant to the lived experience and daily lives of those most affected. Including the multiple and diverse perspectives of all partners in the research process is a methodological challenge. In work to identify strategies to address HIV in a local Black faith community, Concept Mapping was used to address the methodological challenge of inclusion. This session will discuss the process of creating maps with the community and how the results will be used to guide future program development. In particular we will present the rationale for and discuss the challenges of using CM methodology in the CBPR framework.
This document summarizes a presentation on using virtual reality for medical education and training. The presentation discusses how virtual reality can be used to simulate patient interactions and train residents on discussing vaccine hesitancy with parents. It describes a study where residents interacted with virtual avatars of parents in a simulated exam room. Residents reported finding the experience immersive and effective. The training resulted in lower vaccine refusal rates. The presentation concludes by thanking the individuals and departments involved in developing the virtual reality training program.
What is an idea network (aka concept map) and how and why to map a network of... - Daniel McLinden
Idea Network Analysis is a method for crowdsourcing the understanding of, and the design of interventions for, any number of complex challenges. Unlike typical group processes, which rely on achieving consensus, this process allows the many and diverse viewpoints on an issue to emerge. The methodology relies on brainstorming, sorting, and rating to collect input from the group. Input is web-enabled so that participants respond at a time and place convenient for them; time zones and geography are not barriers to participation. Multivariate statistical analyses create a visual representation, a map, of the group's thinking. The group can interpret the map, discuss the meaning and plan actions based on their collective wisdom. The maps are visual, so knowledge of the underlying mathematics is not required.
Software and other resources at: http://dmclinden.github.io/
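A common pipeline for the multivariate step is multidimensional scaling followed by hierarchical clustering. The Python sketch below illustrates that general pipeline on an invented 4-statement dissimilarity matrix; it is a sketch of the technique, not the method's official implementation (see the software link above for that).

import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical input: 1 minus the share of sorters who grouped statements i and j.
dissim = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.1],
    [0.9, 0.8, 0.1, 0.0],
])

# Place statements on a 2-D map, then group map regions into clusters of ideas.
xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
labels = fcluster(linkage(squareform(dissim), method="average"), t=2, criterion="maxclust")

for i, (point, cluster) in enumerate(zip(xy, labels), start=1):
    print(f"statement {i}: position ({point[0]:+.2f}, {point[1]:+.2f}), cluster {cluster}")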
This document summarizes a learning exchange on community-enabled technology for healthcare. It discusses how (1) people can pool their knowledge and resources through technology, (2) the ImproveCareNow network aims to improve care for pediatric inflammatory bowel disease, and (3) their solution is a learning system called ICNexchange.org that allows a dispersed community to share knowledge and tools and learn from each other to improve outcomes for patients.
Handbook of Methodological Approaches to Community-Based ResearchDaniel McLinden
This handbook focuses on community-based research methodologies. It presents quantitative, qualitative, and mixed methods approaches that are relevant for exploring individuals and groups within their communities. The handbook provides examples of how to implement and evaluate interventions at the community level using these different methodological approaches. It also illustrates how community researchers can work together with methodologists to better understand social problems and enable positive change processes within communities.
Techknowledge 2015 fr203 mclinden davis posted - Daniel McLinden
Collaborative online communities have developed a variety of products and services, often relying on voluntary labor; well-known examples include a computer operating system, a web browser, an online encyclopedia, and even the design of t-shirts. Yochai Benkler wrote in The Penguin and the Leviathan, "once you open the possibility that people are not only using the web as a platform … but to pool their efforts, knowledge, and resources … the possibilities for what they can create are astounding." Taking a cue from Benkler, our aim was to create a learning resource that radically improves the treatment and management of a chronic disease in children by making it possible for a geographically dispersed community of clinicians, scientists, patients, and parents to pool their knowledge and learn from each other. We will share what we have learned about the high tech of building a web space, as well as our approach to design relying on personas and scenarios, organizing learning resources through taxonomy and folksonomy, and engaging the community through co-design. While this social learning intervention is directed toward a medical need, the principles of this approach are applicable in any organization that wishes to capitalize on the web as a platform to enable the creation of user-generated content and to create a community that learns by sharing what they know.
McLinden aea applying Social network analysis to analyze the network of ideas... - Daniel McLinden
Concept Mapping is a method that creates a visual representation that illustrates the thoughts, ideas, or planned actions that arise from a group of stakeholders on a particular issue. Social Network Analysis is a method that likewise creates a visual representation of data; a network map typically represents people and the connections, or lack thereof, between these people. While the goals of these two methods differ, the underlying data structures are similar; a network of relationships between data elements. Social network analysis is explored here as a supplement to concept mapping. A secondary analysis of a concept map was conducted using social network analysis. The methods and the implications for supplementing the analysis of concept maps and debriefing results with stakeholders are discussed.
The ImproveCareNow Network created the ImproveCareNow Exchange to be a learning resource for implementing effective chronic disease management systems, such as those for pediatric inflammatory bowel disease. The Exchange acts as a place for the growing network of 63 centers to share tools, processes, stories, and lessons learned. It was designed like Pinterest to allow clinicians, scientists, patients, and parents to pool their knowledge and improve outcomes. The site design and functionality continue to be refined based on user feedback and metrics to best meet the needs of the network.
And Then the Internet Happened: Prospective Thoughts about Concept Mapping in ... - Daniel McLinden
In this millennium the World Wide Web has enabled new models of collaboration and allowed the power of networks to emerge. In the second decade of the new millennium these ideas continue to spread. Cross-disciplinary teams, open innovation, and social networks represent radically different approaches to working in systems to create knowledge, share information, and develop interventions. Think Wikipedia. Methods for program planning and evaluation need to keep pace with these changes, and concept mapping methodology may have been ahead of its time as a method that resonates with 21st-century complexity. Thinking prospectively, we can reframe concept mapping as a method that employs open innovation and networks to create meaning about complex phenomena. On this basis, the future possibilities for the types of problems that can be addressed, and ways to co-create meaning with diverse stakeholders, can be explored.
Contents

Foreword .......................................................................... vii
Introduction ....................................................................... xi
Acknowledgments ................................................................... xxi

Section I: Evaluation Planning ...................................................... 1
1 Identifying Stakeholder Needs ..................................................... 3
   Lizette Zuniga
2 Developing Powerful Program Objectives ........................................... 15
   Heather M. Annulis and Cyndi H. Gaudet
3 Planning Your Evaluation Project ................................................. 29
   Donald J. Ford

Section II: Data Collection ........................................................ 53
4 Using Surveys and Questionnaires ................................................. 55
   Caroline Hubble
5 Designing a Criterion-Referenced Test ............................................ 73
   Sharon A. Shrock and William C. Coscarelli
6 Conducting Interviews ............................................................ 85
   Anne F. Marrelli
7 Conducting Focus Groups .......................................................... 97
   Lisa Ann Edwards
8 Action Planning as a Performance Measurement and Transfer Strategy ............. 107
   Holly Burkett
9 The Success Case Method: Using Evaluation to Improve Training Value and Impact ... 125
   Robert O. Brinkerhoff and Timothy P. Mooney
10 Using Performance Records ...................................................... 135
   Ronald H. Manalu

Section III: Data Analysis ........................................................ 147
11 Using Statistics in Evaluation ................................................. 149
   George R. Mussoline
12 Analyzing Qualitative Data ..................................................... 165
   Keenan (Kenni) Crane
13 Isolating the Effects of the Program ........................................... 173
   Bruce C. Aaron
14 Converting Measures to Monetary Value .......................................... 189
   Patricia Pulliam Phillips
15 Identifying Program Costs ...................................................... 201
   Judith F. Cardenas
16 Calculating the Return-on-Investment ........................................... 213
   Patricia Pulliam Phillips
17 Estimating the Future Value of Training Investments ............................ 223
   Daniel McLinden

Section IV: Measurement and Evaluation at Work ................................... 237
18 Reporting Evaluation Results ................................................... 239
   Tom Broslawsky
19 Giving CEOs the Data They Want ................................................. 253
   Jack J. Phillips
20 Using Evaluation Results ....................................................... 265
   James D. Kirkpatrick and Wendy Kayser Kirkpatrick
21 Implementing and Sustaining a Measurement and Evaluation Practice ............. 283
   Debi Wallace
22 Selecting Technology to Support Evaluation ..................................... 295
   Kirk Smith
23 Evaluating mLearning ........................................................... 307
   Cathy A. Stawarski and Robert Gadd
24 Evaluating Leadership Development .............................................. 321
   Emily Hoole and Jennifer Martineau
25 Evaluating a Global Sales Training Program ..................................... 337
   Frank C. Schirmer
26 Evaluating Technical Training .................................................. 355
   Toni Hodges DeTuncq
27 Evaluating Traditional Training versus Computer Simulation Training for Leader Development ... 373
   Alice C. Stewart and Jacqueline A. Williams

Section V: Voices ................................................................. 385
   Robert O. Brinkerhoff .......................................................... 387
   Mary L. Broad .................................................................. 392
   Jac Fitz-enz ................................................................... 396
   Roger Kaufman .................................................................. 400
   Donald L. Kirkpatrick .......................................................... 404
   Jack J. Phillips ............................................................... 409
   Dana Gaines Robinson ........................................................... 415
   William J. Rothwell ............................................................ 420
Epilogue .......................................................................... 427
About the Editor of “Voices” ...................................................... 429

Appendix: Answers to Exercises .................................................... 431
About the Editor .................................................................. 455
Index ............................................................................. 457
Foreword

The term evaluation invokes a variety of emotions in people. Some people fear the thought of being held accountable through an evaluation process; others see the prospect of evaluation as challenging and motivating. In either case, measurement and evaluation of training has a place among the critical issues in the learning and development field. It is a concept that is here to stay and a competency all learning professionals should pursue and embrace.
The Measurement and Evaluation Dilemma

The dilemma surrounding the evaluation of training is a source of frustration for many executives. Most executives realize that learning is an organizational necessity. Intuitively we know that providing learning opportunities is valuable to the organization and to employees, as individuals. However, the frustration sets in when there is a lack of evidence to show programs really work. Measurement and evaluation represent the most promising way to account for learning investments and to position learning as a catalyst for organization success. Yet, many organizations are still hesitant to pursue a comprehensive measurement strategy, primarily because they lack the answers to questions such as

• How can we move up the evaluation chain?
• How can we collect data efficiently?
• What data should we collect?
• How can we design a practical evaluation strategy that has credibility with all stakeholders?
• What tools, resources, and technologies are available to us?
• How do we ensure we select the right tools?
• How can we ensure we have the right support throughout the organization?
• How can we integrate data in the management scorecard?
• How do we use evaluation data?
Unanswered questions like these prohibit well-meaning learning professionals and their executives from creating a sound measurement strategy. Thus, they hold themselves back from the benefits of a growing trend in workplace learning.
Measurement and Evaluation Trends

One has only to look at the latest conference agenda to see that the evaluation trend continues. It is not a fad, but a growing topic of continued discussion and debate. Throughout the world, organizations are addressing the measurement and evaluation issue by

• increasing their investment in measurement and evaluation
• moving up the evaluation chain beyond the typical reaction and learning data
• increasing the focus of evaluation based on stakeholder needs
• making evaluation an integral part of the design, development, delivery, and implementation of programs rather than an add-on activity
• shifting from a reactive approach to evaluation to a proactive approach
• enhancing the measurement and evaluation process through the use of technology
• planning for evaluation at the outset of program development and design
• emphasizing the initial needs analysis process
• calculating the return-on-investment for costly, high-profile, and comprehensive programs
• using evaluation data to improve programs and processes.

As these tendencies continue, so do the opportunities for the learning function.
Opportunities for Learning and Development

Leaders of the learning function are challenged by the changing landscape of our industry. Our roles have evolved considerably in the last half-century. In the past, learning leaders were fundamentally charged with ensuring the acquisition of job-related skills; then the role expanded to include developmental efforts including leadership development, management development, and executive development. Today our role takes on broader and more strategic responsibilities.

As the role of learning leaders changes, so does the relationship of the learning and development function to the organization. This requires that learning leaders and fellow organization leaders together view, and influence others to see, learning and development not as an add-on activity, but as systemic—a critical, integral part of the organization. To be successful in this role, we must embrace the opportunities that measurement and evaluation offer, including

• tools to align programs with the business
• data collection methods that can be integrated into our programs
• data analysis procedures that ensure we tell our success stories in terms that resonate with all stakeholders, including senior leaders
• information that can influence decisions being made about our function
• tools to help us show value for investments made in our programs
• information that can help us improve our programs, ensuring the right people are involved for the right reasons.

These, along with many other opportunities, await us if we are willing to do what it takes to develop the proficiency and the wherewithal to make training measurement and evaluation work.
Call to Action

As a leader of a learning and development function, I challenge all learning professionals and their leaders to take on measurement and evaluation with fervor. No other step in the human performance improvement process provides the mechanism by which we can influence programs and perceptions as does measurement and evaluation. We know that training, development, and performance improvement programs are a necessity to sustain and grow our organizations. But we also know that activity without results is futile. There is no other way to ensure our programs drive results than to apply the measurement and evaluation concepts presented in this publication and others available through organizations such as the American Society for Training & Development. Take on this challenge with baby steps if you must, giant leaps if you dare. But do it! Measuring and evaluating training can be fun and enlightening if we squelch the fears and embrace the opportunities.
Pat Crull, PhD
Vice president and chief learning officer, Time Warner Cable
Former chair, ASTD Board of Directors
May 2010
Introduction to the ASTD Handbook of Measuring and Evaluating Training
Learning professionals around the world have a love-hate relationship with measurement and evaluation. On the one hand, they agree that good measurement and evaluation practices can provide useful data; on the other hand, they feel that measurement and evaluation take time and resources. However, no one argues that the need for across-the-board accountability is on the rise. This is especially true with training and development. With this demand comes the need for resources to support learning professionals in their quest to build capacity in measurement and evaluation. The ASTD Handbook of Measuring and Evaluating Training and complementary resources are an effort to support learning professionals in this quest.
Measurement and Evaluation: The Challenges and the Benefits

At the most fundamental level, evaluation includes all efforts to place value on events, things, processes, or people (Rossi, Freeman, and Lipsey, 1999). Data are collected and converted into information for measuring the effects of a program. The results help in decision making, program improvement, and in determining the quality of a program (Basarab and Root, 1992).

For decades experts in training evaluation have argued the need for measurement and evaluation. Many organizations have heeded this cry and have applied processes that include quantitative, qualitative, financial, and nonfinancial data. Training functions taking a proactive approach to measurement and evaluation have survived organizational and economic upheaval. Despite the call, however, many training managers and professionals ignore the need for accountability, only to find themselves wondering why the chief financial officer is now taking over training and development. So why is it that many training functions have failed to embrace this critical step in the human performance improvement process?
Measurement and Evaluation Challenges

Barriers to embracing measurement and evaluation can be boiled down to 12 basic challenges.
1. Too Many Theories and Models

Since Kirkpatrick provided his four levels of evaluation in the late 1950s, dozens of evaluation books have been written just for the training community. Add to this the dozens of evaluation books written primarily for the social sciences, education, and government organizations. Then add the 25-plus models and theories for evaluation offered to practitioners to help them measure the contribution of training, each claiming a unique approach and a promise to address evaluation woes and bring about world peace. It’s no wonder there is confusion and hesitation when it comes to measurement and evaluation.
2. Models Are Too Complex

Evaluation can be a difficult issue. Because situations and organizations are different, implementing an evaluation process across multiple programs and organizations is complex. The challenge is to develop models that are theoretically sound, yet simple and usable.
3. Lack of Understanding of Evaluation

It hasn’t always been easy for training professionals to learn this process. Some books on the topic have more than 600 pages, making it impossible for a practitioner to absorb just through reading. Not only is it essential for the evaluator to understand evaluation processes, but also the entire training staff must learn parts of the process and understand how it fits into their role. To remedy this situation, it is essential for the organization to focus on how expertise is developed and disseminated within the organization.
4. The Search for Statistical Precision

The use of complicated statistical models is confusing and difficult to absorb for many practitioners. Statistical precision is needed when high-stakes decisions are being made and when plenty of time and resources are available. Otherwise, very simple statistics are appropriate.
5. Evaluation Is Considered a Postprogram Activity

Because our instructional systems design models tend to position evaluation at the end, it loses the power to deliver the needed results. The most appropriate way to use evaluation is to consider it early—before program development—at the time of conception. With this simple shift in mindset, evaluations are conducted systematically rather than reactively.
6. Failure to See the Long-Term Payoff of Evaluation

Understanding the long-term payoff of evaluation requires examining multiple rationales for pursuing evaluation. Evaluation can be used to

• determine success in accomplishing program objectives
• prioritize training resources
• enhance training accountability
• identify the strengths and weaknesses of the training process
• compare the costs to the benefits of a training program
• decide who should participate in future training programs
• test the clarity and validity of tests, cases, and exercises
• identify which participants were the most successful in the training program
• reinforce major points made to the participant
• improve the training quality
• assist in marketing future programs
• determine if the program was the appropriate solution for the specific need
• establish a database that can assist management in making decisions.
7. Lack of Support from Key Stakeholders

Important stakeholders who need and use evaluation data sometimes don’t provide the support needed to make the process successful. Specific steps must be taken to win support and secure buy-in from key groups, including senior executives and the management team. Executives must see that evaluation produces valuable data to improve programs and validate results. When the stakeholders understand what’s involved, they may offer more support.
8. Evaluation Has Not Delivered the Data Senior Managers Want

Today, senior executives no longer accept reaction and learning data as the final say in program contribution. Senior executives need data on the application of new skills on the job and the corresponding impact in the business units. Sometimes they want return-on-investment (ROI) data for major programs. A recent study shows that the number one data point for senior executives responding to the survey (N=96) is impact data; the number two data point is ROI (Phillips and Phillips, 2010).
9. Improper Use of Evaluation Data

Improper use of evaluation data can lead to four major problems:

• Too many organizations do not use evaluation data at all. Data are collected, tabulated, catalogued, filed, and never used by any particular group other than the individual who initially collected the data.
• Data are not provided to the appropriate audiences. Analyzing the target audiences and determining the specific data needed for each group are important steps when communicating results.
• Data are not used to drive improvement. If not part of the feedback cycle, evaluation falls short of what it is intended to accomplish.
• Data are used for the wrong reasons—to take action against an individual or group or to withhold funds rather than improving processes. Sometimes the data are used in political ways to gain power or advantage over another person.
10. Lack of Consistency

For evaluation to add value and be accepted by different stakeholders, it must be consistent in its approach and methodology. Tools and templates need to be developed to support the method of choice to prevent perpetual reinvention of the wheel. Without this consistency, evaluation consumes too many resources and raises too many concerns about the quality and credibility of the process.
11. A Lack of Standards

Closely tied to consistency is the issue of standards. Standards are rules for making evaluation consistent, stable, and equitable. Without standards there is little credibility in processes and stability of outcomes.
12. Sustainability

A new model or approach with little theoretical grounding often has a short life. Evaluation must be theoretically sound and integrated into the organization so that it becomes routine and sustainable. To accomplish this, the evaluation process must gain the respect of key stakeholders at the outset. Without sustainability, evaluation will be on a roller-coaster ride, where data are collected only when programs are in trouble and less attention is provided when they are not.
Despite these challenges, there are many benefits to implementing comprehensive measurement and evaluation practices.
Measurement and Evaluation Benefits

Organizations embracing measurement and evaluation take on the challenges and reap the benefits. When the training function uses evaluation to its fullest potential, the benefits grow exponentially. Some of the benefits of training measurement and evaluation include

• providing needed responses to senior executives
• justifying budgets
• improving program design
• identifying and improving dysfunctional processes
• enhancing the transfer of learning
• eliminating unnecessary or ineffective projects or programs
• expanding or implementing successful programs
• enhancing the respect and credibility of the training staff
• satisfying client needs
• increasing support from managers
• strengthening relationships with key executives and administrators
• setting priorities for training
• reinventing training
• altering management’s perceptions of training
• achieving a monetary payoff for investing in training.

These key benefits, inherent in almost any type of impact evaluation process, make additional measurement and evaluation an attractive challenge for the training function.
Measurement and Evaluation Fundamentals

Regardless of the measurement and evaluation experts you follow, the process to evaluate a training program includes four fundamental steps. As shown in figure A, these steps are evaluation planning, data collection, data analysis, and reporting. When supported by systems, processes, and tools, a sustainable practice of accountability evolves. This is why a focus on strategic implementation is important.
Figure A. Evaluation Process
[Figure: a cycle of four steps (Evaluation Planning, Data Collection, Data Analysis, Reporting), surrounded on all sides by Implementation.]
Evaluation Planning

The first step in any process is planning. The old adage “plan your work, work your plan” has special meaning when it comes to comprehensive evaluation. Done well, an evaluation can come off without a hitch. Done poorly, evaluators scramble to decide how to go about collecting and analyzing data.
Data Collection

Data collection comes in many forms. It is conducted at different times and involves various data sources. Technique, timing, and sources are selected based on type of data, time requirements, resource constraints, cultural constraints, and convenience. Sometimes surveys and questionnaires are the best technique. If the goal is to assess a specific level of knowledge acquisition, a criterion-referenced test is a good choice. Data gathered from many sources describing how and why a program was successful or not may require the development of case studies. Periodically, the best approach is to build data collection into the program itself through the use of action planning. The key to successful data collection is in knowing what techniques are available and how to use them when necessary.
Data Analysis

Through data analysis the success story unfolds. Depending on program objectives and the measures taken, data analysis can occur in many ways. Basic statistical procedures and content analysis can provide a good description of progress. Sometimes you need to make a clear connection between the program and the results. This requires that you isolate program effects through techniques such as control groups, trend line analysis, and other subjective techniques using estimates. Occasionally, stakeholders want to see the return-on-investment (ROI) in a program. This requires that measures be converted to monetary values and that the fully loaded costs be developed. Forecasting ROI prior to funding a program is an important issue for many organizations.
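The ROI arithmetic referred to above can be made concrete with a small worked example. This is only a hypothetical sketch of the standard benefit-cost and ROI formulas; the monetary figures and variable names below are invented for illustration and are not taken from the handbook.

```python
# Hypothetical worked example of the ROI arithmetic described above.
# All figures are invented; "monetary_benefits" and "fully_loaded_costs"
# are assumed names, not from the handbook.

monetary_benefits = 240_000.0   # program benefits converted to money
fully_loaded_costs = 150_000.0  # design, delivery, participant time, evaluation

# Benefit-cost ratio: dollars returned per dollar invested.
bcr = monetary_benefits / fully_loaded_costs

# ROI uses *net* benefits in the numerator, expressed as a percentage.
roi_pct = (monetary_benefits - fully_loaded_costs) / fully_loaded_costs * 100

print(f"BCR: {bcr:.2f}")       # BCR: 1.60
print(f"ROI: {roi_pct:.0f}%")  # ROI: 60%
```

Note that the ROI percentage puts net benefits (benefits minus fully loaded costs) in the numerator, which is what distinguishes it from the benefit-cost ratio.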
Reporting

The point of evaluation is to gather relevant information about a program and to report the information to the people who need to know. Without communication, measurement and evaluation are no more than activities. Reporting results may occur through detailed case studies, scorecards, or executive summaries. But to make the results meaningful, action must be taken.
Implementation

Program evaluation is an important part of the training process. But the evaluations themselves are outputs of the processes you use. To make evaluation work and to ensure a sustainable practice, the right information must be developed and put to good use. This requires that the right technologies be put into place at the outset, that a strategy be developed and deployed, and that programs of all types are evaluated in such a way that meaningful, useful information evolves.
The ASTD Handbook of Measuring and Evaluating Training

The purpose of this book is to provide learning professionals a tool to which they can refer as they move forward with measurement and evaluation. Each step in the training evaluation process is addressed by experts from corporations, nonprofits, government entities, and academic institutions, as well as those experts who work with a broad range of organizations. Readers will have the opportunity to learn, reflect upon, and practice using key concepts. The handbook will assist readers as they

• plan an evaluation project, beginning with the identification of stakeholder needs
• identify appropriate data collection methods, given the type of data, resources, constraints, and conveniences
• analyze data using basic statistical and qualitative analysis
• communicate results given the audience and their data needs
• use data to improve programs and processes, ensuring the right data are available at the right time.
Scope

This handbook covers various aspects of training measurement and evaluation. Intended to provide readers a broad look at these aspects, the book does not focus on any one particular methodology. Rather, each chapter represents an element of the four steps and implementation of evaluation as described above. The book includes five parts.

Section I, Evaluation Planning, looks at the three steps important to planning an evaluation project. Beginning with identifying stakeholder needs, developing program objectives, then planning the evaluation project, an evaluator is likely to have a successful project implementation.

Section II, Data Collection, covers various ways in which evaluation data can be collected. Although the section leads with surveys and questionnaires, other techniques are described. Techniques include using criterion-referenced tests, interviews, focus groups, and action plans. In addition, the Success Case Method is described, as is using performance records in collecting data.
Section III, Data Analysis, looks at key areas involved in analyzing data, including the use of statistics and qualitative methods. Other topics include how to isolate the effects of a program from other influences, convert data to monetary value, and identify program costs to ensure fully loaded costs are considered when assessing the training investment. In addition, a chapter has been included on calculating ROI, an important element given today’s need to understand value before investing in a program.

Section IV, Measurement and Evaluation at Work, describes key issues in ensuring a successful, sustainable evaluation implementation. This part begins with estimating the future value of training investments and reporting and communicating results. All too often data are collected and analyzed, only to sit idle. Then the issue of giving CEOs the data they really want is covered, as the industry still often misses the mark when it comes to providing data important to the CEO. Of course, even if the data are the right data, if they are not put to use, they serve no real purpose in improving programs. With this issue in mind we’ve included a chapter on using evaluation data. To ensure a long-term approach to evaluation is integrated in the training function, a strategy for success is a must. In addition, the right technology must be selected to support this strategy. Chapters on implementing and sustaining a measurement practice and selecting technology are included in this section. Section IV wraps up with four case studies describing the evaluation of different types of programs.

Section V, Voices, is a summary of interviews with the experts in training measurement and evaluation. Rebecca Ray spent time with each expert, asking them their views of the status of training measurement and evaluation. This summary section provides readers a flavor of those interviews, which are available as podcasts at www.astd.org/HandbookofMeasuringandEvaluatingTraining.
Contributors

Contributors were selected based on their expertise in each area. Expertise, in this case, is not defined by how many books one has written or how well known one is in the industry. Rather, expertise is defined by what these contributors are actually doing with training evaluation. Readers will hear from external consultants who touch a wide variety of organizations, internal consultants who focus on training evaluation within a single organization, individuals who have experience as both internal and external experts, and professors who hone and share their expertise through research. Our contributors work in organizations across the United States, Germany, Indonesia, and Dubai, giving the book an international context.
Target Audience

Four groups serve as the target audience for this book. First and foremost, this publication is a tool that all training professionals need to round out their resource library. Managers of training professionals are another target audience. This resource will support them as they support evaluation within their function. Professors who teach training evaluation will find this publication a good resource to address all elements of the evaluation process. The exercises and references will help professors as they develop coursework, challenge their students’ thinking, and assign application projects. Finally, students of training evaluation will find this publication valuable as they set off to learn more about evaluation and how it drives excellence in program implementation.
How to Get the Most from the Book

The book is designed to provide a learning experience as well as information. Each chapter begins with key learning objectives. Throughout the text authors have included references to real-life applications, practitioner tips from individuals applying the concepts, additional resources and references, and knowledge checks to assess the reader’s understanding of the chapter content. Some knowledge checks have specific correct answers that can be found in the appendix; others offer an opportunity for the reader to reflect and discuss with their colleagues. To get the most out of the book, readers should

1. review the table of contents to see what areas are of most interest
2. read the objectives of the chapter of interest and, upon completion of the chapter, work through the knowledge check
3. follow up on prescribed action steps, references, and resources presented in the chapter
4. participate in the ASTD Evaluation & ROI blog (www1.astd.org/Blog/category/Evaluation-and-ROI.aspx), where additional content is presented and discussed among your colleagues.
We hope you find the ASTD Handbook of Measuring and Evaluating Training a useful resource full of relevant and timely content. Over time we will add to this content through our Evaluation & ROI blog, the Measuring and Evaluating Training website, and other channels of delivery. As you read the content and have suggestions for additional information, workshops in content areas, and supporting material, please let us know. You can reach me at patti@roiinstitute.net, and I will work with ASTD to ensure learning professionals get the information they need for successful training measurement and evaluation.
References

Basarab, D. J., and D. K. Root. (1992). The Training Evaluation Process. Boston: Kluwer Academic Publications.
Phillips, J. J., and P. P. Phillips. (2010). Measuring for Success: What CEOs Really Think About Learning Investments. Alexandria, VA: ASTD.
Rossi, P. H., H. E. Freeman, and M. W. Lipsey. (1999). Evaluation: A Systematic Approach, 6th ed. Thousand Oaks, CA: Sage.