Evaluation design
The methods of evaluation design.



Presentation Transcript

  • Evaluation Design
    Curriculum Evaluation (EDU 5352)
    Mina Badiei (GS31016)
  • What Is Evaluation Design?
    • The plan for an evaluation project is called a "design".
    • It is a particularly vital step in producing an appropriate assessment.
    • A good design offers an opportunity to maximize the quality of the evaluation, and it helps minimize and justify the time and cost necessary to perform the work.
  • Design Process
    1. Identifying evaluation questions and issues
    2. Identifying research designs and comparisons
    3. Choosing sampling methods
    4. Selecting data collection instruments
    5. Collecting and coding qualitative data
    (A short sampling sketch follows this list.)
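    As a rough illustration of step 3, here is a minimal Python sketch of drawing a simple random sample of students for an evaluation. The roster, frame size, and sample size are hypothetical and not part of the original slides:

        # Step 3 (sketch): simple random sampling without replacement.
        # The sampling frame and sizes below are hypothetical.
        import random

        random.seed(42)  # fixed seed so the draw is reproducible

        # Hypothetical sampling frame of 500 student IDs.
        sampling_frame = [f"student_{i:03d}" for i in range(500)]

        # Draw a simple random sample of 50 students.
        sample = random.sample(sampling_frame, k=50)

        print(f"Sampled {len(sample)} of {len(sampling_frame)} students")
        print(sample[:5])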
  • Evaluation Design Approaches
    • Quantitative
    • Qualitative
    • Mixed-Method
  • Quantitative Approach
    • Quantitative data can be counted, measured, and reported in numerical form, and they answer questions such as who, what, where, and how much.
    • The quantitative approach is useful for describing concrete phenomena and for statistically analyzing results.
    • Data collection instruments can be used with large numbers of study participants.
    • Data collection instruments can be standardized, allowing for easy comparison within and across studies. (A short analysis sketch follows below.)
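    As an illustration of statistically analyzing quantitative results, here is a minimal Python sketch comparing test scores for a program group and a comparison group with an independent-samples t-test (scipy). The scores are invented for the example:

        # Sketch: independent-samples t-test on hypothetical scores.
        from scipy import stats

        program_scores = [78, 85, 82, 90, 74, 88, 81, 79, 93, 84]
        comparison_scores = [72, 75, 80, 70, 77, 74, 69, 78, 73, 76]

        t_stat, p_value = stats.ttest_ind(program_scores, comparison_scores)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
        # A small p-value suggests the difference is unlikely to be chance
        # alone; attributing it to the program still depends on the design.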
  • Experimental
    • Experimental designs tend to be rigorous in that they control for external factors and enable you to argue, with some degree of confidence, that your findings are due to the effects of the program rather than to other, unrelated factors.
    • They are rarely applicable in educational settings, where there is a chance that students may be denied an opportunity to participate in a program because of the evaluation design.
  • Quasi-Experimental
    • Quasi-experimental designs are those in which participants are matched beforehand, or after the fact, using statistical methods.
    • These studies offer a reasonable solution for schools or districts that cannot randomly assign students to different programs but still desire some degree of control so that they can make statistical statements about their findings. (A matching sketch follows after this slide.)
  • Time-Series Study
    • It is intended to demonstrate trends or changes over time.
    • The purpose of the design is not to examine the impact of an intervention but simply to explore and describe changes in the construct of interest.
    • Time series offer more data points, but there is little control over extraneous factors.
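    A minimal Python sketch of the matching idea behind quasi-experimental designs: each program student is paired with the comparison student whose pretest score is closest. The names and scores are hypothetical, and real studies usually match on many covariates (e.g., via propensity scores):

        # Sketch: nearest-neighbor matching on a single pretest score.
        program = {"Ana": 62, "Ben": 75, "Cara": 58}
        comparison = {"Dev": 60, "Eli": 74, "Fay": 57, "Gus": 81}

        matches = {}
        available = dict(comparison)
        for name, score in program.items():
            # Pick the closest remaining comparison student.
            partner = min(available, key=lambda c: abs(available[c] - score))
            matches[name] = partner
            del available[partner]  # match without replacement

        print(matches)  # {'Ana': 'Dev', 'Ben': 'Eli', 'Cara': 'Fay'}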
  • Cross-Sectional
    • It is intended to show a snapshot in time.
    • It might be used to answer questions like: What do parents think about our school? What do parents see as the strengths and weaknesses of the school environment? (A survey-tally sketch follows below.)
  • Case Studies
    • Case studies are those which seek to follow program implementation or impact on an individual, group, or organization, such as a school or classroom.
    • Case studies are an excellent way to collect evidence of program effectiveness, to increase understanding of how an intervention is working in particular settings, and to inform a larger study to be conducted later.
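    A minimal Python sketch of summarizing a cross-sectional snapshot: tallying parent-survey responses collected at a single point in time. The responses are hypothetical:

        # Sketch: tallying one-time (cross-sectional) survey responses.
        from collections import Counter

        responses = ["positive", "positive", "neutral", "negative",
                     "positive", "neutral", "positive", "negative"]

        tally = Counter(responses)
        n = len(responses)
        for opinion, count in tally.most_common():
            print(f"{opinion:>8}: {count} ({count / n:.0%})")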
  • Types of Experimental Design
    • Post-test only design
      • The least complicated of the experimental designs.
      • It has three steps: 1) decide what comparisons are desired and meaningful; 2) ensure the students in two or more comparison groups are similar; 3) collect the information after the posttest to determine whether differences occurred.
    • Pre-post design
      • It is employed when a pretreatment measure can supply useful information.
      • This design is commonly used in the field-trial stage. (A pre-post analysis sketch follows below.)
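    A minimal Python sketch of the analysis step in a pre-post design: each student is measured before and after the program, so the natural comparison is a paired t-test (scipy). The scores are hypothetical:

        # Sketch: paired (pre-post) t-test on hypothetical scores.
        from scipy import stats

        pre  = [55, 61, 48, 70, 66, 59, 63, 52]
        post = [60, 68, 55, 74, 71, 62, 70, 58]

        gains = [b - a for a, b in zip(pre, post)]
        t_stat, p_value = stats.ttest_rel(post, pre)  # post first: gains > 0 give t > 0
        print(f"mean gain = {sum(gains) / len(gains):.1f} points")
        print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")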
  • Example: Equilibrium Effects of Education Policies: A Quantitative Evaluation
    By: Giovanni Gallipoli, Costas Meghir, Giovanni L. Violante
    The paper compares partial and general equilibrium effects of alternative education policies on the distribution of education and earnings. The numerical counterpart of the model, parameterized through a variety of data sources, yields education enrollment responses which are broadly in line with reduced-form estimates. Through numerical simulations, the authors compare the effects of alternative policy interventions on optimal education decisions, inequality, and output. It is a kind of quasi-experimental design.
  • Qualitative Approach
    • Qualitative data are reported in narrative form.
    • The qualitative approach can provide important insights into how well a program is working and what can be done to increase its impact.
    • Qualitative data can also provide information about how participants, including the people responsible for operating the program as well as the target audience, feel about the program.
    • It promotes understanding of diverse stakeholder perspectives (e.g., what the program means to different people).
    • Stakeholders, funders, policymakers, and the public may find quotes and anecdotes easier to understand and more appealing than statistical data.
  • Qualitative Methods
    • Observation
    • Interview
    • Focus Groups
    • Document Studies
    • Key Informants
  • Observation
    • Observational techniques are methods by which an individual or individuals gather firsthand data on programs, processes, or behaviors being studied.
    • They provide evaluators with an opportunity to collect data on a wide range of behaviors, to capture a great variety of interactions, and to openly explore the evaluation topic.
    • By directly observing operations and activities, the evaluator can develop a holistic perspective, i.e., an understanding of the context within which the project operates.
    • Observational approaches also allow the evaluator to learn about things the participants or staff may be unaware of or that they are unwilling or unable to discuss in an interview or focus group.
  • When to Use Observations
    • Observations can be useful during both the formative and summative phases of evaluation. For example, during the formative phase, observations can be useful in determining whether or not the project is being delivered and operated as planned.
    • In the hypothetical project, observations could be used to describe the faculty development sessions, examining the extent to which participants understand the concepts, ask the right questions, and are engaged in appropriate interactions.
    • Observations during the summative phase of evaluation can be used to determine whether or not the project is successful. The technique would be especially useful in directly examining teaching methods employed by the faculty in their own classes after program participation.
  • Interviews
    • The use of interviews as a data collection method begins with the assumption that the participants' perspectives are meaningful, knowable, and able to be made explicit, and that their perspectives affect the success of the project.
    • Interviews provide very different data from observations: they allow the evaluation team to capture the perspectives of project participants, staff, and others associated with the project.
    • In the hypothetical example, interviews with project staff can provide information on the early stages of the implementation and problems encountered.
    • An interview, rather than a paper-and-pencil survey, is selected when interpersonal contact is important and when opportunities for follow-up of interesting comments are desired.
    • Two types of interviews are used in evaluation research: structured interviews, in which a carefully worded questionnaire is administered, and in-depth interviews, in which the interviewer does not follow a rigid form.
  • Contd.
    • Structured interviews: The emphasis is on obtaining answers to carefully phrased questions. Interviewers are trained to deviate only minimally from the question wording to ensure uniformity of interview administration.
    • In-depth interviews: The interviewers seek to encourage free and open responses, and there may be a trade-off between comprehensive coverage of topics and in-depth exploration of a more limited set of questions. In-depth interviews also encourage capturing of respondents' perceptions in their own words, which allows the evaluator to present the meaningfulness of the experience from the respondent's perspective. They are conducted with individuals or with a small group of individuals.
  • When to Use Interviews
    • Interviews can be used at any stage of the evaluation process. They are especially useful in answering questions such as those suggested by Patton (1990):
      • What does the program look and feel like to the participants? To other stakeholders?
      • What are the experiences of program participants?
      • What do stakeholders know about the project?
      • What thoughts do stakeholders knowledgeable about the program have concerning program operations, processes, and outcomes?
      • What are participants' and stakeholders' expectations?
      • What features of the project are most salient to the participants?
      • What changes do participants perceive in themselves as a result of their involvement in the project?
    (A small coding sketch for interview responses follows below.)
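    A minimal, hypothetical Python sketch of one bookkeeping step when coding interview responses: tagging each response against a keyword codebook. Real qualitative coding is interpretive work done by analysts; this only illustrates how tagged data might be organized. The codebook, themes, and responses are invented:

        # Sketch: first-pass tagging of interview responses against a
        # hypothetical keyword codebook. Illustrative bookkeeping only.
        codebook = {
            "engagement": ["engaged", "interested", "motivated"],
            "workload": ["time", "busy", "workload"],
        }

        responses = [
            "The students seemed really engaged and motivated this term.",
            "Honestly I had no time to prepare because of the workload.",
        ]

        for resp in responses:
            text = resp.lower()
            themes = [theme for theme, keywords in codebook.items()
                      if any(kw in text for kw in keywords)]
            print(themes or ["uncoded"], "<-", resp)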
  • Focus Groups
    • Focus groups combine elements of both interviewing and participant observation.
    • The focus group session is, indeed, an interview, not a discussion group, problem-solving session, or decision-making group (Patton, 1990).
    • The hallmark of focus groups is the explicit use of the group interaction to generate data and insights that would be unlikely to emerge without the interaction found in a group.
    • The technique inherently allows observation of group dynamics, discussion, and firsthand insights into the respondents' behaviors, attitudes, language, etc.
    • Focus groups are a gathering of 8 to 12 people who share some characteristics relevant to the evaluation. They were originally used as a market research tool to investigate the appeal of various products.
  • Contd.
    • The focus group technique has been adopted by other fields, such as education, as a tool for data gathering on a given topic.
    • Focus groups conducted by experts take place in a focus group facility that includes recording apparatus (audio and/or visual) and an attached room with a one-way mirror for observation. There is an official recorder, who may or may not be in the room.
    • Participants are paid for attendance and provided with refreshments.
  • When to Use Focus Groups
    • When conducting evaluations, focus groups are useful in answering the same types of questions as in-depth interviews, except in a social context.
    • Specific applications of the focus group method in evaluations include:
      • identifying and defining problems in project implementation;
      • identifying project strengths, weaknesses, and recommendations;
      • assisting with interpretation of quantitative findings;
      • obtaining perceptions of project outcomes and impacts; and
      • generating new ideas.
  • Other Qualitative Methods
    • Document Studies: Guba and Lincoln (1981) defined a document as "any written or recorded material" not prepared for the purposes of the evaluation or at the request of the inquirer. Documents can be divided into two major categories: public records and personal documents.
    • Key Informants: A key informant is a person (or group of persons) who has unique skills or professional background related to the issue/intervention being evaluated, is knowledgeable about the project participants, or has access to other information of interest to the evaluator.
    • Key informants can help the evaluation team better understand the issue being evaluated, as well as the project participants, their backgrounds, behaviors, and attitudes, and any language or ethnic considerations. They can offer expertise beyond the evaluation team. They are also very useful for assisting with the evaluation of curricula and other educational materials. Key informants can be surveyed or interviewed individually or through focus groups.
  • Example: A Qualitative Evaluation Process for Educational Programs Serving Handicapped Students in Rural Areas
    By: LUCILLE ANNESEZEPH
    The paper describes a qualitative methodology designed to evaluate special education programs in rural areas serving students with severe special needs. A rationale is provided for the use of the elements of aesthetic criticism as the basis of the methodology, and specific descriptions of the steps for its implementation and validation are provided. Some practical limitations and particular areas of usefulness are also discussed.
  • Mixed Method
    • In recent years evaluators of educational and social programs have expanded their methodological repertoire with designs that include the use of both qualitative and quantitative methods. Such practice, however, needs to be grounded in a theory that can meaningfully guide the design and implementation of mixed-method evaluations.
    • In many cases a mixture of designs can work together as a design for evaluating a large, complex program.
    • The ideal evaluation combines quantitative and qualitative methods. A mixed-method approach offers a range of perspectives on a program's processes and outcomes.
    • For example, the impact of a reading intervention on student performance may be compared for all students in a school over a period of time, using repeated measures from exams administered for this purpose, but the evaluation could also include more focused case studies of particular classes to learn about crucial implementation issues. (A brief mixed-method sketch follows below.)
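    A minimal Python sketch of the mixed-method idea from the example above: a school-wide quantitative trend summarized alongside qualitative case notes that help explain it. All figures, classroom names, and notes are hypothetical:

        # Sketch: one quantitative strand (school-wide exam means over
        # three terms) next to one qualitative strand (case notes from
        # two focus classrooms). All values are hypothetical.
        exam_means = {"term1": 61.2, "term2": 64.8, "term3": 68.5}
        case_notes = {
            "class_A": "Teacher adapted the intervention to small groups.",
            "class_B": "Implementation stalled; materials arrived late.",
        }

        terms = list(exam_means)
        gain = exam_means[terms[-1]] - exam_means[terms[0]]
        print(f"School-wide gain from {terms[0]} to {terms[-1]}: {gain:.1f} points")

        for classroom, note in case_notes.items():
            print(f"{classroom}: {note}")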
  • Benefits
    • It increases the validity of your findings by allowing you to examine the same phenomenon in different ways.
    • It can result in better data collection instruments. For example, focus groups can be invaluable in the development or selection of a questionnaire used to gather quantitative data.
    • It promotes greater understanding of the findings. Quantitative data can show that change occurred and how much change took place, while qualitative data can help you and others understand what happened and why.
    • It offers something for everyone. Some stakeholders may respond more favorably to a presentation featuring charts and graphs; others may prefer anecdotes and stories.
  • Example: A Mixed Methods Evaluation of a 12-Week Insurance-Sponsored Weight Management Program Incorporating Cognitive-Behavioral Counseling
    By: Christiaan Abildso, Sam Zizzi, Diana Gilleland, James Thomas, and Daniel Bonner
    A sequential mixed methods approach was used to assess the physical and psychosocial impact of a 12-week cognitive-behavioral weight management program and explore factors associated with weight loss. Quantitative data revealed a program completion rate and mean percentage weight loss that compare favorably with other interventions, and differential psychosocial impacts on those losing more weight. Telephone interviews revealed four potential mechanisms for these differential impacts: (a) fostering accountability, (b) balancing perceived effort and success, (c) redefining "success," and (d) developing cognitive flexibility.