Here are the key points an orientation for an evaluation team should cover:
- Purpose and scope of the evaluation
- Roles and responsibilities of team members
- Evaluation questions and intended uses of findings
- Important stakeholders and how they will be engaged
- Evaluation design, methodology, data collection procedures
- Analysis plan and timeline for delivering findings
- Resources and support available to the team
- Expectations for team communication and collaboration
The team should include staff with skills in research design, data collection/management, statistical analysis, and experience with the program/population being evaluated. Regular team meetings are important to track progress and address any issues.
1. EVALUATING STUDENT SUCCESS INITIATIVES
LACCD Student Success & 3CSN Summit
Making Sure Things Work Before We Scale Them Up
Center for Applied Research at CPCC, 2013
2. PURPOSE
We want to take some time to discuss common misconceptions and issues experienced by colleges around the subject of evaluation.
We want to understand the differences between evaluation and research.
We want to know how to develop and implement a good evaluation for an intervention or program.
3. PROGRAM EVALUATION
What is evaluation?
Evaluation is a profession composed of persons with varying interests, potentially encompassing but not limited to the evaluation of programs, products, personnel, policy, performance, proposals, technology, research, theory, and even of evaluation itself.
Go to: http://www.eval.org
At the bottom of the homepage there is a link to a free training package and facilitator's guide for teaching the Guiding Principles for Evaluators.
4. MORE ON EVALUATION
As defined by the American Evaluation Association, evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness.
Evaluation is the systematic collection and analysis of data needed to make decisions, a process in which most well-run programs engage from the outset. Here are just some of the evaluation activities that are already likely to be incorporated into many programs, or that can be added easily:
Pinpointing the services needed, for example, finding out what knowledge, skills, attitudes, or behaviors a program should address
5. CONTINUED
Establishing program objectives and deciding the particular evidence (such as the specific knowledge, attitudes, or behavior) that will demonstrate that the objectives have been met. A key to successful evaluation is a set of clear, measurable, and realistic program objectives. If objectives are unrealistically optimistic or are not measurable, the program may not be able to demonstrate that it has been successful even if it has done a good job.
Developing or selecting from among alternative program approaches, for example, trying different curricula or policies and determining which ones best achieve the goals
6. CONTINUED
Tracking program objectives, for example, setting up a system that shows who gets services, how much service is delivered, how participants rate the services they receive, and which approaches are most readily adopted by staff
Trying out and assessing new program designs, determining the extent to which a particular approach is being implemented faithfully by school or agency personnel
7. PROGRAM EVALUATION
Purpose
To establish better products, personnel, programs, organizations, governments, consumers, and the public interest; to contribute to informed decision making and more enlightened change; precipitating needed change; empowering all stakeholders by collecting data from them and engaging them in the evaluation process; and experiencing the excitement of new insights.
Evaluators aspire to construct and provide the best possible information that might bear on the value of whatever is being evaluated.
8. Definition of Evaluation
A study designed and conducted to assist some audience to assess an object's merit and worth. (Stufflebeam, 1999)
Identification of defensible criteria to determine an evaluation object's value (worth or merit), quality, utility, effectiveness, or significance in relation to those criteria. (Fitzpatrick, Sanders & Worthen, 2004)
9. Definition of Evaluation
Goal 1: Determine the merit or worth of an evaluand. (Scriven, 1991)
Goal 2: Provide answers to significant evaluative questions that are posed.
Evaluation is a value judgment based on defensible criteria.
10. Evaluation Questions
Evaluation questions provide the direction and foundation for the evaluation; without them the evaluation will lack focus.
The evaluation's focus will determine the questions asked: needs assessment questions, process evaluation questions, or outcomes evaluation questions.
11. TYPES OF EVALUATION
Process evaluation determines if the processes are happening according to the plan.
The processes of a program are the "nitty-gritty" details or the "dosage" students, patients, or clients receive, that is, the activities.
It is the who is going to do what, and when.
It answers the question "Is this program being delivered as it was intended?"
12. TYPES OF EVALUATION
Outcome evaluation (the most critical piece for accreditation) determines how participants do on short-range, mid-range, or long-range outcomes.
It usually involves setting program goals and outcome objectives.
It answers the question "Is this program working?" and/or "Are participants accomplishing what we intended for them to accomplish?"
13. TYPES OF EVALUATION
Impact evaluation looks at how the results affected the student group, college, community, or family (the larger group, over time).
It answers the question "Is this program having the impact it was intended to have?" (so you must start with intentions).
14. TWO MAJOR TYPES OF EVALUATION
15. IR DEPARTMENTS
The good news is... you are all data people.
The bad news is... you are all data people.
IR staff sometimes have difficulty realizing that this is not research and that it demands more than data from your student system.
16. Evaluation vs. Research
Use: Evaluation is intended for use; use is the rationale. Research produces knowledge and lets the natural process determine use.
Questions: In evaluation, the decision-maker, not the evaluator, comes up with the questions to study. In research, the researcher determines the questions.
Judgment: Evaluation compares what is with what should be; does it meet established criteria? Research studies what is.
Setting: Evaluation takes place in an action setting; priority goes to the program, not the evaluation. In research, priority goes to the research, not to what is being studied.
Roles: In evaluation there is friction between the evaluator's roles and the program provider's roles because of the judgmental qualities of evaluation. In research there is no such friction between researcher and funder.
18. INTERVENTIONS HAVE QUESTIONABLE SUCCESS
The evaluation doesn't take into consideration all factors, including methodology and quality of implementation.
The college needs to have a realistic, courageous conversation about standards of evidence, statistical significance, and expectations.
Most of the time is spent planning the intervention, not on how to evaluate it.
Success is never defined: what should it look like, and what is a reasonable target?
19. INTERVENTIONS ARE OFTEN TOO COMPLICATED
Multiple layers of independent variables.
The college lacks the staff, software, or ability to carry it out.
Groups keep getting smaller and smaller (for sample or comparison groups).
We don't really know what worked.
Expansion happens too quickly.
20. INTERVENTIONS HAVE QUESTIONABLE ABILITY TO BE ADAPTED ON A LARGE SCALE
Not enough consideration of the costs of scaling.
No one wants to cancel plans involving un-scalable interventions (someone's pet project).
Develop a culture where it is OK to take risks and learn from mistakes.
21. THE COLLEGE SKEPTIC
The one who wants everything to be statistically significant.
The faculty group that wants to talk about confidence intervals or power.
Fear that things won't work.
"We tried that before."
They confuse evaluation with research.
22. LIMITED ABILITY TO EVALUATE
The whole concept is new to many.
Funders force us to begin the process.
There may be no one at the institution to lead them through it (health faculty are the best place to start).
Colleges don't know what resources are out there.
23. ANALYSIS PARALYSIS
Let's slice and dice the data more and more and more.
Too much data to analyze.
Colleges don't know what it tells them.
How do we make a decision about priorities and strategies from 200 pages of data tables?
24. THE SUMMER HIATUS
Faculty leave in June and never give the initiative a thought until August 20th.
No interventions are in place when fall term begins.
No evaluation tools are in place.
Baseline data cannot be collected.
From August 20-31 they are mostly concerned with preparing for fall classes (as they should be).
25. NO WORKABLE EVALUATION TIMELINES
Creating a timeline.
Identifying all the detail.
Getting a team to actually follow it.
Who is responsible for each piece?
Where do completed surveys/assessments go, who scores them, who analyzes them, and who makes decisions based on them?
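One way to keep that detail from slipping is to write the timeline down with an explicit owner and due date for each step. A minimal sketch in Python; the tasks, owners, and dates below are made-up placeholders, not part of the original workshop materials.

```python
# Minimal sketch of an evaluation timeline with explicit owners.
# Tasks, owners, and dates are hypothetical placeholders.
timeline = [
    {"task": "Finalize pre/post assessment", "owner": "Evaluation team lead", "due": "2013-08-15"},
    {"task": "Distribute pre-assessment",    "owner": "Course faculty",       "due": "2013-08-26"},
    {"task": "Score and enter results",      "owner": "IR analyst",           "due": "2013-09-06"},
    {"task": "Review formative data",        "owner": "Evaluation team",      "due": "2013-10-15"},
]

for step in timeline:
    print(f'{step["due"]}  {step["task"]}  ->  {step["owner"]}')
```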
26. What does a logic model look like?
A graphic display of boxes and arrows, vertical or horizontal, showing relationships and linkages.
Any shape is possible: circular, dynamic, cultural adaptations, storyboards.
Level of detail: simple, complex, multiple models.
Source / adapted from UW-Extension: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
27. Where are you going? How will you get there? What will tell you that you've arrived?
A logic model is your program ROAD MAP.
28. Example: Everyday logic model, family vacation
Inputs: family members, budget, car, camping equipment.
Activities: drive to the state park; set up camp; cook, play, talk, laugh, hike.
Outcomes: family members learn about each other; the family bonds; the family has a good time.
Source: E. Taylor-Powell, University of Wisconsin-Extension, Cooperative Extension
29. Example: Financial management program
Situation: Individuals with limited knowledge and skills in basic financial management are unable to meet their financial goals and manage money to meet their needs.
INPUTS (what we invest): Extension invests time and resources.
OUTPUTS (what we do): We conduct a variety of educational activities targeted to individuals who participate.
OUTCOMES (what results): Participants gain knowledge, change practices, and have improved financial well-being.
Source: E. Taylor-Powell, University of Wisconsin-Extension, Cooperative Extension
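The inputs-outputs-outcomes structure above is easy to capture alongside an evaluation plan as a simple data structure. The sketch below is illustrative only; the class and field names are our own assumptions, not taken from the UW-Extension templates, and the example values restate the financial management program above.

```python
# A minimal sketch of a logic model as a plain data structure (illustrative only;
# the field names are assumptions, not part of the UW-Extension templates).
from dataclasses import dataclass

@dataclass
class LogicModel:
    situation: str        # the problem the program responds to
    inputs: list[str]     # what we invest
    outputs: list[str]    # what we do / who we reach
    outcomes: list[str]   # what results

financial_mgmt = LogicModel(
    situation="Individuals with limited financial management skills "
              "are unable to meet their financial goals.",
    inputs=["Extension staff time", "Program resources"],
    outputs=["Educational activities targeted to individuals who participate"],
    outcomes=["Participants gain knowledge",
              "Participants change practices",
              "Participants have improved financial well-being"],
)

for label, items in [("INPUTS", financial_mgmt.inputs),
                     ("OUTPUTS", financial_mgmt.outputs),
                     ("OUTCOMES", financial_mgmt.outcomes)]:
    print(f"{label}: " + "; ".join(items))
```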
30. Example: One component of a comprehensive parent education and support initiative
Situation: During a county needs assessment, a majority of parents reported that they were having difficulty parenting and felt stressed as a result.
INPUTS: staff, money, partners, research.
OUTPUTS: develop parent education curriculum; deliver a series of interactive parenting sessions; facilitate support groups; targeted parents attend.
OUTCOMES: parents increase knowledge of child development; parents better understand their own parenting style; parents gain skills in effective parenting practices; parents identify appropriate actions to take; parents use effective parenting practices; improved child-parent relations; strong families.
Assumptions:
External factors:
31. Example: Smoke-free worksites
Situation: Secondhand smoke is responsible for lung cancer, respiratory symptoms, and cardiovascular disease, and worsens asthma. Public policy change that creates smoke-free environments is the best known way to reduce and prevent smoking.
Inputs: coalition, time, dollars, partners (including youth), the public.
Outputs: assess worksite tobacco policies and practices; develop community support for smoke-free (SF) worksites; organize and implement an SF worksite strategy for targeted worksites; reach worksite owners and managers, unions, workers and union members.
Outcomes: increased awareness of the importance of SF worksites; increased knowledge of SF worksite benefits and options; increased commitment, support, and demand for SF worksites; demonstrations of public support for SF worksites; SF worksite policies drafted; SF worksite policies passed; adherence to smoke-free policies.
Source: E. Taylor-Powell, University of Wisconsin-Extension, Cooperative Extension
32. Needs Assessment, Process Evaluation, and Outcomes Evaluation Questions
INPUT questions: What resources are needed for starting this intervention strategy? How many staff members are needed?
PROCESS questions: Is the intervention strategy being implemented as intended? Are participants being reached as intended?
OUTCOMES questions: To what extent are desired changes occurring? For whom? Is the intervention strategy making a difference? What seems to work? What does not?
Source: R. Rincones-Gomez, 2009
33. CHAIN OF OUTCOMES (short, medium, long-term)
Short: Seniors increase knowledge of food contamination risks. Medium: They practice safe cooling of food and follow food preparation guidelines. Long-term: Lowered incidence of food-borne illness.
Short: Participants increase knowledge and skills in financial management. Medium: They establish financial goals and use a spending plan. Long-term: Reduced debt and increased savings.
Short: The community increases understanding of childcare needs. Medium: Residents and employers discuss options and implement a plan. Long-term: Child care needs are met.
Short: An empty inner-city parking lot is converted to a community garden. Medium: Youth and adults learn gardening skills, nutrition, food preparation, and management. Long-term: Money saved, nutrition improved, residents enjoy a greater sense of community.
Source: E. Taylor-Powell, University of Wisconsin-Extension, Cooperative Extension
34. WHAT ARE THE SUMMATIVE AND FORMATIVE OUTCOME INDICATORS?
Supplemental Instruction
Learning Communities
Required Orientation
Academic Success Course
Minority Male Mentoring
Developmental Math Redesign
Peer Tutoring
Accelerated English
35. AT YOUR TABLES
Select an ATD student success initiative at your college that you plan to evaluate before you make the decision to scale it up. (If you can't think of one, use the online learning example in your handouts.)
Use this program for each activity.
36. 1. BRING TOGETHER THE PROGRAM DEVELOPERS
Ask them to answer these questions:
1. Why did you develop this program with these program characteristics?
2. What do you think students (or participants) will get out of this program (what changes)?
3. How do you tie specific program content to specific expected changes or improvements in participants?
37. 2. ORIENT AN EVALUATION TEAM
Who should be on it?
What skills do you need at the table (and which staff members have them)?
What should be their charge?
38. 3. GATHER INFORMATION ON POTENTIAL OUTCOMES
What are potential sources for outcomes?
39. 4. WRITE OUTCOME STATEMENTS
Sometimes these are already written (from grants).
Make them clear.
Don't draw a number out of a hat.
Test them out.
Create a logic model.
40. 5. CREATE OUTCOME INDICATORS
Outcome indicator: usually referred to as a key performance indicator, this is the data, or set of statistics, that best verifies the accomplishment of a specific outcome. An outcome indicator for college readiness might be an SAT score of 1100 or above. It is typically the accomplishment of a specific skill or assessment at a certain level that indicates an outcome is met.
What data can you access?
What assessments need to be selected?
41. 6. CREATE OUTCOME TARGETS
Outcome target: the benchmark set as a performance indicator for a given outcome. An example would be that 80% of students would score 75% or above on a reading assessment. The outcome target would be "80% of students."
How would you create these targets or benchmarks?
Do you need a comparison group?
What is an acceptable level of improvement or change?
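As a concrete illustration of the reading-assessment example above, here is a minimal sketch of checking an outcome target against collected assessment data; the scores are made-up illustration values, not real program results.

```python
# Minimal sketch: check the "80% of students score 75% or above" target.
# The score list is hypothetical illustration data.
scores = [82, 91, 74, 68, 88, 95, 77, 80, 60, 85]  # hypothetical reading scores

indicator_threshold = 75   # outcome indicator: score of 75% or above
target_proportion = 0.80   # outcome target: 80% of students

met_indicator = sum(1 for s in scores if s >= indicator_threshold)
proportion = met_indicator / len(scores)

print(f"{met_indicator}/{len(scores)} students ({proportion:.0%}) met the indicator")
print("Outcome target met" if proportion >= target_proportion else "Outcome target not met")
```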
42. 7. CREATE ALL TOOLS
You will probably need: demographic sheets, an attendance or participation log, and formative evaluation tools.
Will they be online or pencil-and-paper tools (there are benefits to each)?
When do they need to be ready?
Who needs copies?
Create the evaluation timeline.
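If the attendance or participation log ends up in a spreadsheet, a few lines of code can turn it into the "dosage" numbers the process evaluation needs. A minimal sketch, assuming a hypothetical CSV file named attendance_log.csv with student_id and session columns (both names are placeholders):

```python
# Minimal sketch: summarize participation "dosage" from an attendance log.
# The file name and column names are hypothetical placeholders.
import csv
from collections import Counter

sessions_attended = Counter()
with open("attendance_log.csv", newline="") as f:
    for row in csv.DictReader(f):          # expects columns: student_id, session
        sessions_attended[row["student_id"]] += 1

for student, count in sessions_attended.most_common():
    print(f"{student}: attended {count} sessions")
```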
43. 8. PILOT TEST THE PROCESS
Make sure it works.
Give a small group of students or faculty/staff the assessments to make sure they are clear.
Work out all the details: who distributes it, who collects it, who scores it, who puts it in the spreadsheet, who keeps up with the post-test dates, and so on.
44. 9. IMPLEMENT THE EVALUATION
Follow your plan.
45. 10. ANALYZE RESULTS
Sometimes just numbers and percents are enough.
Sometimes statistical tests are needed.
If students don't meet the summative evaluation benchmarks, analyze the formative evaluation.
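When a comparison group exists, a simple test of proportions is often all the statistics that is needed. The sketch below assumes the completion counts are already tabulated (the numbers are hypothetical) and uses SciPy's chi-square test of independence as one reasonable choice, not the only one.

```python
# Minimal sketch: compare completion rates for an intervention group vs. a
# comparison group. The counts below are hypothetical, not real program data.
from scipy.stats import chi2_contingency

#          completed   did not complete
counts = [[145, 55],   # intervention group
          [120, 80]]   # comparison group

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference in completion rates is unlikely to be
# due to chance alone; interpret it alongside practical significance.
```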
46. 11. IMPROVE YOUR PROCESS AND PROGRAM
It takes several years to have good data.
Discuss how the evaluation can be improved.
Discuss how the program can be improved.
47. CLOSING
Establish your plan.
Follow your plan.
Assign responsibility for it.
Expect big things.
Use results to improve what you do (close the loop).
Actually, we use logic models every day. Let's look at this: we want to take a family vacation, and what we really hope is that we'll have a good time and enjoy being together. We have had experience and know (our own personal research tells us) that camping is something we all enjoy doing together. So, in order to take a camping trip, we need certain things. If this, then that: logic models involve a mental process. A logic model shows the series of connections and logical linkages that is expected to result in achievement of our goal.