The Nature of Program Evaluation
Carlo Magno, PhD
Counseling and Educational Psychology Department
Answer the following questions:
• Why is program evaluation needed?
• What are the roles of a professional
program evaluator?
Program evaluation is needed because…
• Policy makers need good information about the relative
effectiveness of the program.
– Which programs are working well?
– Which poorly?
– What are the program’s relative cost and benefits?
– Which parts of the program are working?
– What can be done with those parts that are not working well?
– Have all parts of the program been thought through carefully at the planning stage?
– What is the theory or logic model for the program's effectiveness?
– What adaptations would make the program more effective?
Program Evaluation
• Systematic investigation of the merit, worth, or significance of an object (Scriven, 1999); hence, assigning “value” to a program’s efforts means addressing three interrelated domains:
– Merit (or quality)
– Worth (or value, i.e., cost-effectiveness)
– Significance (or importance)
• The identification, clarification, and application of
defensible criteria to determine an object’s value
in relation to those criteria (Fitzpatrick, Worthen, & Sanders, 2004).
Prerequisite to evaluation
• Need a program – an organized action:
– Direct service interventions
– Community mobilization efforts
– Research initiatives
– Surveillance systems
– Policy development activities
– Outbreak investigations
– Laboratory diagnostics
– Communication campaigns
– Infrastructure building projects
– Training and education services
– Administrative systems
Inquiry and Judgment in Evaluation
• (1) Determining standards for judging
quality and deciding whether those
standards should be relative or absolute.
• (2) Collecting relevant information
• (3) Applying the standards to determine
value, quality, utility, effectiveness, or
significance.
Evidence of value and judgment:
• What will be evaluated? (i.e., what is "the program" and
in what context does it exist?)
• What aspects of the program will be considered when
judging program performance?
• What standards (i.e., type or level of performance) must
be reached for the program to be considered
successful?
• What evidence will be used to indicate how the program
has performed?
• What conclusions regarding program performance are
justified by comparing the available evidence to the
selected standards?
• How will the lessons learned from the inquiry be used to
improve public health effectiveness?
Difference between Research and Evaluation
• Purpose
• Approaches
• Who sets the agenda?
• Generalizability of results
• Criteria and standards
• Preparation
Difference in Purpose
• Research
– Adds knowledge to a field, contributes to theory
– Seeks conclusions
• Evaluation
– Helps those who hold a stake in whatever is being evaluated
– Leads to judgments
Difference in Approaches
• Research
– Quest for laws
– Explore and establish causal relationships
• Evaluation
– Describes a phenomenon; may use causal relationships
– Whether causal relationships are examined depends on the needs of the stakeholders
Difference on who sets the agenda
• Research
– The researcher chooses the hypothesis to investigate and the appropriate steps for developing the theory.
• Evaluation
– Questions to be answered come from many sources (stakeholders).
– Consults with stakeholders to determine the focus of the study.
Difference in generalizability of results
• Research
– Methods are designed to maximize
generalizability to many different settings
• Evaluation
– Specific to the context in which the evaluation object rests.
Difference in Criteria and standards
• Research
– Internal validity (causality)
– External validity (generalizability)
• Evaluation
– Accuracy (corresponding to reality)
– Utility (results serve practical information needs)
– Feasibility (realistic, prudent, diplomatic, frugal)
– Propriety (done legally and ethically)
Difference in Preparation
• Research
– In-depth training in a single discipline within their field of inquiry.
• Evaluation
– Responds to the needs of clients and stakeholders with many information needs, operating in many different settings.
– Interdisciplinary: sensitive to a wide range of phenomena that they must attend to.
– Familiar with a wide variety of methods
– Establish personal working relationships with clients
(interpersonal and communication skills)
Competencies needed by professional evaluators (Sanders, 1999)
• Ability to describe the object and context of an evaluation
• Conceptualize appropriate purposes and framework for
evaluation
• Identify and select appropriate evaluation questions,
information needs, and sources of information
• Select means for collecting and analyzing information
• Determine the value of the object of an evaluation
• Communicate plans and results effectively to audiences
• Manage the evaluation
• Maintain ethical standards
• Adjust to external factors influencing the evaluation
• Evaluate the evaluation
Purposes of Evaluation
• Talmage (1982)
– Render judgment on the worth of the program
– Assist decision makers responsible for deciding policy
– Serve a political function
• Rallis and Rossman (2000)
– Learning, helping practitioners and others better
understand and interpret their observations
Purposes of Evaluation
• Weiss (1988) and Henry (2000)
– Bring about social betterment
• Mark, Henry, and Julnes (1999)
– Betterment – alleviation of social problems, meeting
of human needs
• Chelimsky (1997) – takes a global perspective:
new technologies, demographic imbalance,
environmental protection, sustainable
development, terrorism, human rights
Purposes of Evaluation
• House and Howe (1999)
– Foster deliberative democracy: work to help less powerful stakeholders gain a voice and to stimulate dialogue among stakeholders in a democratic fashion.
• Mark, Henry, and Julnes (1999)
– Assessment of merit and worth
– Oversight and compliance
– Program and organizational improvement
– Knowledge development
Roles of the Professional Evaluator
• Rallis and Rossman (2000)
– Critical friend: “someone the emperor knows
and can listen to. She is more friend than
judge, although she is not afraid to offer
judgment” (p. 83)
• Schwandt (2001)
– Helping practitioners develop critical judgment
Roles of the Professional Evaluator
• Patton (1996)
– Facilitator
– Collaborator
– Teacher
– Management consultant
– Organizational development (OD) specialist
– Social-change agent
• Preskill and Torres (1999)
– Bring about organizational learning and instill a learning environment
Roles of the Professional Evaluator
• Mertens (1999), Chelimsky (1998), and
Greene (1997)
– Including the stakeholders as part of the
evaluation process
• House and Howe (1999)
– Stimulating dialogue among various groups
Roles of the Professional Evaluator
• Bickman (2001) and Chen (1990)
– Take part in program planning
– Help articulate program theories or logic models
• Wholey (1996)
– Help policy makers and managers select the performance dimensions to be measured as well as the tools to use in measuring those dimensions
Roles of the Professional Evaluator
• Lipsey (2000)
– Provides expertise to track things down, systematically observe and measure them, and compare, analyze, and interpret with a good-faith attempt at objectivity.
Roles of the Professional Evaluator
• Fitzpatrick, Worthen, and Sanders (2004)
– Negotiating with stakeholder groups to define the purpose of the evaluation
– Developing contracts
– Hiring and overseeing staff
– Managing budgets
– Identifying disenfranchised or underrepresented groups
– Working with advisory panels
– Collecting, analyzing, and interpreting qualitative and quantitative information
– Communicating frequently with various stakeholders to seek
input into the evaluation and to report results
– Writing reports
– Considering effective ways to disseminate information
– Meeting with the press and other representatives to report on
progress and results
– Recruiting others to evaluate the evaluation
Examples of evaluation use in Education
• To empower teachers to have more say about how school budgets are allocated
• To judge the quality of the school curricula in specific
content areas
• To accredit schools that meet minimum accreditation
standards
• To determine the value of a middle school’s block
scheduling
• To satisfy an external funding agency’s demands for reports on the effectiveness of school programs it supports
• To assist parents and students in selecting schools in a
district with school choice
• To help teachers improve their reading program to
encourage more voluntary reading
Examples of evaluation use in other public and nonprofit sectors
• To decide whether to implement an urban development
program
• To establish the value of a job-training program
• To decide whether to modify a low-cost housing project’s
rental policies
• To improve a recruitment program for blood donors
• To determine the impact of a prison’s early release program on recidivism
• To gauge community reaction to proposed fire-burning
restrictions to improve air quality
• To determine the cost-benefit contribution of a new
sports stadium for a metropolitan area
Examples of evaluation use in Business and industry
• To improve a commercial product
• To judge the effectiveness of a corporate training
program on teamwork
• To determine the effect of a new flextime policy on
productivity, recruitment, and retention
• To identify the contributions of specific programs to
corporate profits
• To determine the public’s perception of a corporation’s
environmental image
• To recommend ways to improve retention among
younger employees
• To study the quality of performance-appraisal feedback
Formative and Summative Evaluation
• Formative – provides information for program improvement; involves judgment of a part of a program.
• Summative – concerned with providing information to serve decisions or assist in making judgments about program adoption, continuation, or expansion.
