2. MANAGEMENT-ORIENTED EVALUATION APPROACHES
This approach is meant to serve decision makers.
RATIONALE
- Evaluation information is an essential part of good decision making, and the evaluator can be most effective by serving administrators, managers, policy makers, boards, practitioners, and others who need good evaluation information.
DEVELOPERS
Developers of this approach have relied on a systems approach to evaluation in which decisions are made about the inputs, processes, and outputs of a program, drawing on program models and program theory.
DECISION MAKER
The decision maker is the audience to whom a management-oriented evaluation is directed; the decision maker's concerns, informational needs, and criteria for effectiveness give direction to the study.
3. DEVELOPERS OF THE MANAGEMENT-ORIENTED EVALUATION
APPROACH AND THEIR CONTRIBUTION
STUFFLEBEAM (1968)
Recognized the shortcomings of available evaluation approaches while working to expand and systematize thinking about administrative studies and educational decision making.
HENRY BARNARD, HORACE MANN, WILLIAM TORREY HARRIS,
CARLETON WASHBURNE
(1960s and 1970s)
They drew from management theory.
STUFFLEBEAM (1968)
Made the decisions of program managers the pivotal organizer for the evaluation, rather than program objectives.
ALKIN (1969)
The evaluator, working closely with administrators, identifies the decisions the administrators must make and then collects sufficient information about the relative advantages and disadvantages of each decision alternative to allow a fair judgment based on specified criteria.
The success of the evaluation rests on the quality of teamwork between evaluators and decision makers.
4. CIPP EVALUATION MODEL
Stufflebeam (1971, 2000; Stufflebeam & Shinkfield, 1985), a proponent of decision-oriented evaluation approaches, structured evaluation to help administrators make good decisions.
He viewed evaluation as "the process of delineating, obtaining, and providing useful information for judging decision alternatives."
He developed an evaluation framework to serve managers and administrators facing four different kinds of decisions:
1. Context Evaluation – to serve planning decisions.
– What needs are to be addressed by the program?
– What programs already exist?
Answering these questions helps in defining objectives for the program.
2. Input Evaluation – to serve structuring decisions.
– What resources are available?
– What alternative strategies for the program should be considered?
– What plan seems to have the best potential for meeting needs?
Answering these questions facilitates the design of program procedures.
3. Process Evaluation – to serve implementing decisions.
– How well is the plan being implemented?
– What barriers threaten its success?
– What revisions are needed?
Once these questions are answered, procedures can be monitored, controlled, and refined.
4. Product Evaluation – to serve recycling decisions.
– What results were obtained?
– How well were needs reduced?
– What should be done with the program after it has run its course?
These questions are important in judging program attainments.
Stufflebeam's Evaluation Model
CIPP is an acronym for Context, Input, Process, and Product.
General steps of Stufflebeam's proposed evaluation:
6. A. Focusing the Evaluation
1. Identify the major levels of decision making to be served (e.g., local, state, national).
2. For each level of decision making, project the decisions to be served and describe each in terms of its focus, critical timing, and composition of alternatives.
3. Define criteria for each decision situation by specifying the variables for measurement and the standards for use in judging alternatives.
4. Define the policies within which the evaluation must operate.
B. Collection of Information
1. Specify the sources of the information to be collected.
2. Specify the instruments and methods for collecting the needed information.
3. Specify the sampling procedure to be employed.
4. Specify the conditions and schedule for the collection of information.
C. Organization of Information
1. Provide a format for the information that is to be collected.
2. Designate a means for coding, organizing, storing, and retrieving the information.
D. Analysis of Information
1. Select the analytical procedures to be employed.
2. Designate a means for performing the analysis.
E. Reporting of Information
1. Define the audience for the evaluation reports.
2. Specify means for providing information to the audiences.
3. Specify the format for evaluation reports.
4. Schedule the reporting
F. Administration of the Evaluation.
1. Summarize the evaluation schedule.
2. Define staff and resource requirements and plans for meeting these requirements.
3. Specify means for meeting policy requirements for conduct of the evaluation.
4. Evaluate the potential of the evaluation design for providing information that is valid, reliable, credible, timely, and pervasive (i.e., reaching all relevant stakeholders).
7. FOUR TYPES OF EVALUATION
CONTEXT EVALUATION
Objectives
– To define the institutional context, identify the target population, and assess its needs
– To identify opportunities for addressing the needs
– To diagnose problems underlying the needs
– To judge whether proposed objectives are sufficiently responsive to the assessed needs
Method
– By using such methods as system analysis, surveys, document reviews, hearings, interviews, diagnostic tests, and the Delphi technique
Relation to decision making in the change process
– For deciding on the setting to be served, the goals associated with meeting needs or using
opportunities, and the objectives associated with solving problems.
INPUT EVALUATION
Objective
– To identify and assess system capabilities, alternative program strategies, procedural designs for implementing the strategies, budgets, and schedules
Method
– By inventorying and analyzing available human and material resources, solution strategies, and procedural designs for relevance, feasibility, and economy, and by using such methods as a literature search, visits to exemplary programs, advocate teams, and pilot trials
Relation to decision making in the change process
– For selecting sources of support, solution strategies, and procedural designs – that is, for structuring
change activities – and to provide a basis for judging implementation
8. PROCESS EVALUATION
Objectives
– To identify or predict, in process, defects in the procedural design or its implementation; to provide information for preprogrammed decisions; and to record and judge procedural events and activities
Method
– By monitoring the activity's potential procedural barriers and remaining alert to unanticipated ones, by obtaining specified information for programmed decisions, by describing the actual process, and by continually interacting with and observing the activities of project staff
Relation to decision making in the change process
– For implementing and refining the program design and procedure – that is, for effecting
process control – and to provide a log of the actual process for later use in interpreting
outcomes
PRODUCT EVALUATION
Objective
– To collect descriptions and judgments of outcomes, to relate them to objectives and to context, input, and process information, and to interpret their worth and merit
Method
– By defining operationally and measuring outcome criteria, by collecting judgments of outcomes from stakeholders, and by performing both qualitative and quantitative analyses
Relation to decision making in the change process
– For deciding to continue, terminate, modify, or refocus a change activity, and to present a
clear record of effects (intended and unintended, positive and negative)
9. The UCLA Evaluation Model
Alkin (1969) developed an evaluation framework that parallels the CIPP model. He defined evaluation as "the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decision-makers in selecting among alternatives." Alkin's model included the following five types of evaluation:
1. Systems assessment, to provide information about the state of the system (similar to context evaluation in the CIPP model)
2. Program planning, to assist in the selection of particular programs likely to be effective in meeting specific educational needs (similar to input evaluation)
3. Program implementation, to provide information about whether a program was introduced to the appropriate group in the manner intended
4. Program improvement, to provide information about how a program is functioning, whether interim objectives are being achieved, and whether unanticipated outcomes are appearing (similar to process evaluation)
5. Program certification, to provide information about the value of the program and its potential for use elsewhere (similar to product evaluation)
10. Four assumptions about evaluation:
1. Evaluation is a process of gathering information.
2. The information collected in an evaluation will be used mainly to make decisions about alternative courses of action.
3. Evaluation information should be presented to the decision maker in a form that he can use effectively and that is designed to help rather than confuse or mislead him.
4. Different kinds of decisions require different kinds of evaluation procedures.
11. Growth and Development of the Early Models
The CIPP and UCLA frameworks for evaluation appear to be linear and sequential, but the developers have stressed that such is not the case. For example, the evaluator would not have to complete an input evaluation or a systems assessment in order to undertake one of the other types of evaluation listed in the framework. Evaluators often undertake "retrospective" evaluations (such as a context evaluation or a systems assessment) in preparation for a process or program improvement evaluation study, believing this evaluation approach to be cumulative, linear, and sequential; however, such steps are not always necessary. A process evaluation can be done without having completed context or input evaluation studies. The evaluator may also cycle into another type of evaluation if some decisions suggest that earlier decisions should be reviewed. Such is the nature of management-oriented evaluation.
The CIPP model has produced guides for the types of evaluation included in the framework. For example, Stufflebeam (1977) advanced the procedure for conducting a context evaluation with his guidelines for designing a needs assessment for an education program or activity.
An input evaluation procedure was developed by Reinhard (1972). The input evaluation approach that she developed is called the advocate team technique. It is used when acceptable alternatives for designing a new program are not available or obvious. The technique creates alternative new designs that are then evaluated and selected, adapted, or combined to create the most viable alternative design for a new program. This technique has been used successfully by the federal government (Reinhard, 1972) and by school districts (Sanders, 1982) to generate options and guide the final design of educational programs. Procedures proposed by Cronbach (1963) provided useful suggestions for the conduct of process evaluation.
12. OTHER MANAGEMENT-ORIENTED EVALUATION
APPROACHES
Provus's Discrepancy Evaluation Model was described as an objectives-oriented evaluation model. Some aspects of that model are also directed toward serving the information needs of educational program managers. It is system-oriented, focusing on input, process, and output at each of five stages of evaluation: program definition, program installation, program process, program products, and cost-benefit analysis. In this respect it resembles the CIPP and UCLA evaluation models in their sensitivity to the various decisions managers need to make at each stage of program development. Likewise, the systems approaches of logic models and program theory focus on inputs, processes, and outcomes, as do the management-oriented evaluation approaches.
The utilization-focused evaluation approach of Patton (1986, 1996) could also be
viewed as a decision-making approach in one respect. He stressed that the process of
identifying and organizing relevant decision makers and information users is the first
step in evaluation. In his view, the use of evaluation findings requires that decision
makers determine what information is needed by various people and arrange for that
information to be collected and provided to them.
Wholey (1983, 1994) could also be considered a proponent of management-oriented evaluation, given his focus on working with managers. His writings have concentrated on the practical uses of evaluation in public administration settings.
13. How the Management-Oriented Evaluation Approach Has Been
Used
The CIPP model has been used in school districts and in state and federal government agencies. The Dallas (Texas) Independent School District, for example, established an evaluation office organized around the four types of evaluation in the model.
The management-oriented approach to evaluation has guided program managers through program planning, operation, and review. Program staff have found this approach a useful guide to program improvement.
The approach has also been used for accountability purposes. It provides a record-keeping framework that facilitates public review of client needs, objectives, plans, activities, and outcomes. Administrators and boards have found this approach useful in meeting public demands for information. Stufflebeam and Shinkfield (1985) described these two uses for the CIPP model as shown in the figure below.
14. The Relevance of the Four Evaluation Types to Decision Making and Accountability

Context
– Decision making (formative orientation): Guidance for choice of objectives and assignment of priorities
– Accountability (summative orientation): Record of objectives and bases for their choice, along with a record of needs, opportunities, and problems

Input
– Decision making (formative orientation): Guidance for choice of program strategy; input for specification of procedural design
– Accountability (summative orientation): Record of chosen strategy and design, and reasons for their choice over other alternatives

Process
– Decision making (formative orientation): Guidance for implementation
– Accountability (summative orientation): Record of the actual process

Product
– Decision making (formative orientation): Guidance for termination, continuation, modification, or installation
– Accountability (summative orientation): Record of attainments and recycling decisions
15. STRENGTHS AND LIMITATIONS OF THE MANAGEMENT-ORIENTED
EVALUATION APPROACH
This approach has proved appealing to many evaluators and program managers, particularly those at home with the rational and orderly systems approach, to which it is clearly related. Its greatest strength is that it gives focus to the evaluation. Experienced evaluators know how tempting it is simply to cast a wide net, collecting an enormous amount of information, only later to discard much of it because it is not directly relevant to the key issues or questions the evaluation must address. Focusing on the informational needs and pending decisions of managers limits the range of relevant data and brings the evaluation into sharp focus. This evaluation approach also stresses the importance of the utility of information. Connecting decision making and evaluation underscores the very purpose of evaluation. In addition, focusing an evaluation on the decisions managers must make prevents the evaluator from pursuing unfruitful lines of inquiry that are not of interest to the decision makers.
The management-oriented approach to evaluation was instrumental in showing evaluators and program managers that they need not wait until an activity or program has run its course before evaluating it. Educators can begin evaluating even when ideas for programs are first discussed. Because of lost opportunities and heavy resource investment, evaluation is generally least effective at the end of a developing program.
16. The management-oriented evaluation approach is probably the preferred choice in the eyes of most managers and boards because of the priority it places on information for decision makers. This approach addressed one of the biggest criticisms of evaluation in the 1960s: that it did not provide useful information.
The CIPP model is a useful and simple heuristic tool that helps the evaluator generate potentially important questions to be addressed in an evaluation. For each of its four types of evaluation (context, input, process, and product), the evaluator can identify a number of questions about an undertaking. The model and the questions it generates also make the evaluation easy to explain to lay audiences.
The management-oriented approach to evaluation supports evaluation of every component of a program as it operates, grows, or changes. It stresses the timely use of feedback by decision makers so that the program is not left to flounder or proceed unaffected by updated knowledge about needs, resources, new developments, the realities of day-to-day operations, or the consequences of program interventions.
A potential weakness of this approach is the evaluator's occasional inability to respond to questions or issues that may be significant, even critical, but that clash with or at least do not match the concerns and questions of the decision maker who, essentially, controls the evaluation. In addition, programs that lack decisive leadership are not likely to benefit from this approach to evaluation.
17. Another potential weakness of management-oriented evaluation is the preference it seems to give to top management. The evaluator can become the "hired gun" of the manager and the program establishment. A further potential weakness of the management-oriented approach is the possibility that the evaluation can become unfair, and possibly even undemocratic, toward stakeholders who have less power and fewer resources (House and Howe, 1999).
The policy-shaping community includes:
• Public servants, such as responsible officials at the policy and
program levels and the actual operating personnel.
• The public, consisting not only of constituents but also influential
persons such as commentators, academic social scientists,
philosophers, gadflies, and even novelists or dramatists.
18. Few policy studies have been found to have a direct effect on the policy-shaping community, but evaluations can and do influence these audiences over time. Policy, as a reflection of public values, may be seen as never-ending in that it continues to be molded or revised as issues, reforms, social causes, and social values change or come to the forefront of attention. One important role of the evaluator is to illuminate, not to dictate, the decision. Helping clients to understand the complexity of issues, not giving simple answers to narrow questions, is a role of evaluation.
Another limitation is that, if followed in its entirety, the management-oriented approach can result in costly and complex evaluations. If priorities are not carefully set and followed, the many questions to be addressed using a management-oriented approach can clamor for attention, leading to an evaluation system as large as the program itself and diverting resources from program activities. In planning evaluation procedures, management-oriented evaluators need to consider the resources and time available. If the management-oriented approach requires more time or resources than are available, another approach may have to be considered.
19. As a case in point, consider the program manager who has to make decisions about next week's production schedule. This manager may be able to use only the CIPP or UCLA models informally, as an armchair aid. The management-oriented evaluator needs to be realistic about what work is possible and must not promise more than can be delivered.
This evaluation approach assumes that the important decisions can be clearly identified in advance, that clear decision alternatives can be specified, and that the decisions to be served remain reasonably stable while the evaluation is being done. All of these assumptions about the orderliness and predictability of the decision-making process are suspect and frequently unwarranted. Frequent adjustments may be needed in the original evaluation plan if this approach is to work well.
20. MAJOR CONCEPTS AND THEORIES
1. The major impetus behind the management-oriented approach to evaluation is to inform decision makers about the inputs, processes, and outputs of the program under evaluation. This approach considers the decision maker's concerns, information needs, and criteria for effectiveness when developing the evaluation.
2. Stufflebeam's CIPP evaluation model incorporates four separate evaluations (i.e., context, input, process, and product) into one framework to better serve managers and decision makers. Each of these evaluations collects data to serve different decisions (e.g., context evaluation serves planning decisions) by progressing through a series of evaluation steps that provide structure to the evaluation.
3. In the CIPP model, a context evaluation helps define objectives for the program under evaluation.
4. To facilitate the design of program procedures, the CIPP model's input evaluation provides information on what resources are available, what alternative strategies for the program should be considered, and what plans will best meet the needs of the program.
5. A process evaluation is used in the CIPP model to determine how well a program is being
implemented, what barriers to success exist, and what program revisions may be needed.
6. Product evaluation is used in the CIPP model to provide information on what program results
were obtained, how well needs were reduced, and what should be done once the program has
ended.
7. Alkin's UCLA model is similar to the CIPP model in that it provides decision makers with information on the context, inputs, implementation, processes, and products of the program under evaluation.