Street Jibe Evaluation Workshop 2

Dr. Uzo Anucha - Workshop presentation - StreetJibe: Thinking Critically to Improve Program Effectiveness

  • Transcript

    • 1. A Conversation about Program Evaluation: Why, How and When? Uzo Anucha, MSW, PhD; Associate Professor, School of Social Work; Director, Applied Social Welfare Research and Evaluation Group, York University
    • 2. Presentation Outline
      • Setting the Context for our Program Evaluation Work
        • Our Evaluation Principles…..
        • Why Evaluate?
        • Who is An Evaluation For?
        • Types of Evaluation
      • Outcome Evaluation
      • Planning a Program Evaluation
        • Engage Stakeholders
        • Focus the Evaluation
        • Collect Data
        • Analyze & Interpret
        • Use the Information
      • Ready, Set, Go? Some Things to Consider…..
    • 3. Setting the Context for our Program Evaluation Work
    • 4. Our Evaluation Principles …..
      • We are committed to the following principles/values in our evaluation work
      • Strengthen projects
      • Use multiple approaches
      • Design evaluation to address real issues
      • Create a participatory process
      • Allow for flexibility
      • Build capacity
      • (W.K. Kellogg Foundation Evaluation Handbook, 1998)
    • 5. Our Evaluation Approach….
      • A Critical Approach: Question the questions.
      • Some questions to consider:
        • How does this program work?
        • Why has it worked or not worked? For whom and in what circumstances?
        • What was the process of development and implementation?
        • What were the stumbling blocks faced along the way?
        • What do the experiences mean to the people involved?
        • How do these meanings relate to intended outcomes?
        • What lessons have we learned about developing and implementing this program?
        • How have contextual factors impacted the development, implementation, success, and stumbling blocks of this program?
        • What are the hard-to-measure impacts of this program (ones that cannot be easily quantified)? How can we begin to effectively document these impacts?
      • (W.K. Kellogg Foundation Evaluation Handbook, 1998)
    • 6. Our Evaluation Approach….
      • We acknowledge the influence of paradigms, politics, and values and are willing to deal with these by:
        • Getting ‘inside’ the project
        • Creating an environment where all stakeholders are encouraged to discuss their values and philosophies
        • Challenging our assumptions
        • Asking stakeholders for their perspectives on particular issues
        • Listening
        • Remembering there may be multiple “right” answers
        • Maintaining regular contact and providing feedback to stakeholders
        • Designing specific strategies to air differences and grievances.
        • Making the evaluation and its findings useful and accessible. Early feedback and a consultative relationship with stakeholders and project staff lead to a greater willingness by staff to disclose important and sensitive information
        • Remaining sensitive to the feelings and rights of individuals.
        • Creating an atmosphere of openness to findings, with a commitment to considering change and a willingness to learn.
        • (W.K. Kellogg Foundation Evaluation Handbook, 1998)
    • 7. What is Not Program Evaluation? What is Program Evaluation?
    • 8.
      • Program evaluation is not an assessment of individual staff performance. The purpose is to gain an overall understanding of the functioning of a program.
      • Program evaluation is not an audit – evaluation does not focus on compliance with laws and regulations.
      • Program evaluation is not research. It is a pragmatic way to learn about a program.
      What is Not Program Evaluation?
    • 9.
      • Program evaluation is not one method. It can involve a range of techniques for gathering information to answer questions about a program.
        • Most programs already collect a lot of information that can be used for evaluation. Data collection for program evaluation can be incorporated in the ongoing record keeping of the program.
      What is Not Program Evaluation?
    • 10.
      • Program evaluation means taking a systematic approach to asking and answering questions about a program.
        • Program evaluation is a collection of methods, skills and sensitivities necessary to determine whether a human service is needed and likely to be used, whether it is sufficiently intensive to meet the unmet needs identified, whether the service is offered as planned, and whether the human service actually does help people in need at reasonable cost without undesirable side effects (Posavac & Carey, 2003, p. 2)
      What is Program Evaluation?
    • 11. Why Evaluate?
    • 12.
      • Verify that resources are devoted to meeting unmet needs
      • Verify that planned programs do provide services
      • Examine the results
      • Determine which services produce the best results
      • Select the programs that offer the most needed types of services
      Why Evaluate?
    • 13.
      • Provide information needed to maintain and improve quality
      • Watch for unplanned side effects
      • Create program documentation
      • Help to better allocate program resources
      • Assist staff in program development and improvement
      Why Evaluate?
    • 14. Evaluation can….
      • Increase our knowledge base
      • Guide decision making
        • Policymakers
        • Administrators
        • Practitioners
        • Funders
        • General public
        • Clients
      • Demonstrate accountability
      • Assure that client objectives are being achieved
    • 15. Who is an evaluation for?
    • 16.
      • What do they want to know?
      • What do we want to tell them about the program?
      • How can they contribute to the evaluation?
      • Program participants?
      • Family members and caregivers?
      • Program staff?
      • Volunteers?
      • Partner agencies and professionals?
      • Referral sources?
      • Funders?
      • Others?
      Who is an evaluation for?
    • 17. Types of Evaluation….
    • 18. Types of evaluations
      • Needs assessment
      • Evaluability assessment
      • Process evaluation
      • Outcome evaluation
      • Efficiency evaluation (cost evaluation)
    • 19. Process Evaluation….
    • 20. Process Evaluation
      • Sometimes referred to as “formative evaluation”
      • Documents and analyzes how a program works and identifies key factors that influence the operation of the program.
      • Allows for a careful description of a program’s actual implementation and services, thereby facilitating replication of the program.
      • Emphasis is on describing activities and characteristics of clients and workers.
      • Allows for an investigation of whether services are delivered in accordance with program design and makes it possible to study the critical ingredients of a model.
    • 21. Process Evaluation
      • Findings of a process evaluation are critical in shaping further development of a program’s services and assist in explaining why program objectives are (or are not) being met.
      • Focuses on verifying program implementation: it looks at the approach to client service delivery and day-to-day operations.
      • Two major elements:
        • 1) How a program’s services are delivered to clients (what workers do, including frequency and intensity; client characteristics; satisfaction)
        • 2) administrative mechanisms to support these services (qualifications; structures; hours; support services; supervision; training)
    • 22. Process Evaluation:
      • Examples of Process Evaluation Questions (the sketch after this list shows how routine program records can begin to answer some of them):
        • Is the program attracting a sufficient number of clients?
        • Are clients representative of the target population?
        • How much contact do staff actually have with clients?
        • Does the workload of staff match that planned?
        • Are there differences in effort among staff?
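      A minimal sketch, assuming hypothetical program records, of how routine record keeping can feed these questions: it tallies worker-client contacts from an invented contact log and checks them against an assumed planned-intensity target. The field names, figures, and target are illustrative only, not StreetJibe's actual data.

      ```python
      # Illustrative sketch only: the contact log, field names, and the
      # planned-contact target below are hypothetical, not real program data.
      from collections import Counter
      from statistics import mean

      # Routine record keeping: one row per worker-client contact.
      contact_log = [
          {"client_id": "C01", "worker": "A"},
          {"client_id": "C01", "worker": "A"},
          {"client_id": "C02", "worker": "B"},
          {"client_id": "C03", "worker": "A"},
          {"client_id": "C03", "worker": "B"},
          {"client_id": "C03", "worker": "B"},
      ]

      PLANNED_CONTACTS_PER_CLIENT = 3  # assumed service-intensity target

      contacts_per_client = Counter(row["client_id"] for row in contact_log)
      contacts_per_worker = Counter(row["worker"] for row in contact_log)

      print("Average contacts per client:", mean(contacts_per_client.values()))
      print("Clients below planned intensity:",
            [c for c, n in contacts_per_client.items()
             if n < PLANNED_CONTACTS_PER_CLIENT])
      print("Effort by worker:", dict(contacts_per_worker))
      ```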
    • 23. Outcome Evaluation….
    • 24. Outcome Evaluation
      • Outcomes are benefits or changes for individuals or populations during or after participating in program activities. Outcomes may relate to behavior, skills, knowledge, attitudes, values, condition, or other attributes.
      • They are what participants know, think, or can do; or how they behave; or what their condition is, that is different following the program.
      • Outcome evaluation helps us to demonstrate the nature of change that took place
    • 25. Outcome Evaluation
      • Outcome evaluation tests hypotheses about how we believe that clients will change after a period of time in our program.
      • Evaluation findings are specific to a specific group of clients experiencing the specific condition of one specific program over a specific time frame at a specific time.
    • 26. For example:
      • For a program that counsels families on financial management, outputs (what the service produces) include the number of financial planning sessions and the number of families seen. The desired outcomes (the changes sought in participants' behavior or status) can include developing and living within a budget, making monthly additions to a savings account, and having increased financial stability. A minimal sketch of this output/outcome distinction follows below.
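      A minimal sketch of that output/outcome distinction, using invented family records (all field names and figures are hypothetical):

      ```python
      # Hypothetical data for the financial-counselling example above.
      families = [
          {"id": "F1", "sessions": 4, "has_budget_after": True,  "monthly_savings_after": 50},
          {"id": "F2", "sessions": 2, "has_budget_after": False, "monthly_savings_after": 0},
          {"id": "F3", "sessions": 5, "has_budget_after": True,  "monthly_savings_after": 120},
      ]

      # Outputs: what the service produces.
      total_sessions = sum(f["sessions"] for f in families)
      families_seen = len(families)

      # Outcomes: changes in participants' behaviour or status.
      budgeting_rate = sum(f["has_budget_after"] for f in families) / families_seen
      saving_rate = sum(f["monthly_savings_after"] > 0 for f in families) / families_seen

      print(f"Outputs:  {total_sessions} sessions delivered to {families_seen} families")
      print(f"Outcomes: {budgeting_rate:.0%} living within a budget, "
            f"{saving_rate:.0%} making monthly savings deposits")
      ```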
    • 27. Uses of Outcome Evaluation
      • Improving program services to clients
      • Generating knowledge for the profession
      • Estimating costs
      • Demonstrate the nature of change: evaluation of program objectives, e.g. what we expect clients to achieve
      • Guide major program decisions and program activities
    • 28. Outcome Evaluation
      • Describe program effects
        • Is the desired outcome observed?
        • Are program participants better off than non-participants?
        • Is there evidence that the program caused the observed changes?
        • Is there support for the theoretical foundations underpinning the program?
        • Is there evidence that the program could be implemented successfully elsewhere?
    • 29. Program-Level Evaluations
      • Program level evaluations vary on a continuum and are fundamentally made up of three levels
        • Exploratory
        • Descriptive
        • Explanatory
    • 31. Exploratory Outcome Evaluation Designs
      • Questions here include:
        • Did the participants meet a criterion (e.g. Treated vs. Untreated)?
        • Did the participants improve (e.g. appropriate direction)?
        • Did the participants improve enough (e.g. statistical vs. meaningful difference)?
        • Is there a relation between change and service intensity and participant characteristics?
    • 32. Exploratory Designs
      • One group post test only
      • Multi-group post test only
      • Longitudinal case study
      • Longitudinal survey
    • 33. Strengths of Exploratory Designs
      • Less intrusive and inexpensive
      • Assess the usefulness and feasibility of further evaluations
      • Can correlate improvement with other variables, such as service intensity (see the sketch after this list).
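      For instance, a small sketch of the "does improvement covary with service intensity?" check, using invented scores (statistics.correlation requires Python 3.10+):

      ```python
      # Exploratory check: does change covary with service intensity?
      # Scores below are invented for illustration.
      from statistics import correlation  # Python 3.10+

      sessions_attended = [2, 4, 6, 8, 10, 3, 7]
      improvement_score = [1, 3, 4, 6, 9, 2, 5]   # e.g. posttest minus pretest

      r = correlation(sessions_attended, improvement_score)
      print(f"Pearson r between service intensity and improvement: {r:.2f}")
      # A sizeable correlation suggests further (descriptive or explanatory)
      # evaluation may be worth the cost; it does not by itself show causation.
      ```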
    • 34. Descriptive Designs
      • To show that something causes something else, it is necessary to demonstrate:
        • 1) That the cause precedes the supposed effects in time, e.g. that the intervention precedes the change
        • 2) That the cause covaries with the effect – the change covaries with the intervention – the more the intervention, the more the change
        • 3) That no viable explanation of the effect can be found except for the assumed cause, e.g. there can be no other explanation for the change except the intervention
      • Both 1 and 2 can be achieved with exploratory designs…but not 3.
    • 35. Descriptive Designs
      • Randomized one-group posttest only
      • Randomized cross-sectional and longitudinal survey
      • One-group pretest-posttest
      • Comparison group posttest only
      • Comparison group pretest-posttest (analysis sketched after this list)
      • Interrupted time series 
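      A minimal sketch of how a comparison group pretest-posttest design is commonly analyzed, comparing the change observed in each group; all scores are invented for illustration:

      ```python
      # Comparison group pretest-posttest: compare change in the program group
      # with change in a non-randomized comparison group (invented scores).
      from statistics import mean

      program_pre, program_post = [10, 12, 9, 11], [15, 18, 14, 16]
      comparison_pre, comparison_post = [10, 11, 10, 12], [11, 12, 10, 13]

      program_change = mean(program_post) - mean(program_pre)
      comparison_change = mean(comparison_post) - mean(comparison_pre)

      print(f"Mean change, program group:    {program_change:+.1f}")
      print(f"Mean change, comparison group: {comparison_change:+.1f}")
      print(f"Difference in change:          {program_change - comparison_change:+.1f}")
      # The comparison group shows how much change would likely have occurred
      # anyway (covariation, criterion 2); it narrows, but does not eliminate,
      # rival explanations (criterion 3).
      ```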
    • 36. Explanatory Designs
      • The defining characteristic is observation of people randomly assigned to either a program or a control condition.
      • Considered much better at addressing threats to internal validity
      • Program group vs. control group: if groups are formed randomly, there is no reason to believe they differ in rate of maturation, that participants self-selected into groups, or that the groups began at different levels (see the sketch below).
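      A minimal sketch of that defining feature: random assignment of hypothetical participants to a program or control condition, followed by a simple posttest comparison. The outcome scores are simulated for illustration.

      ```python
      # Explanatory design sketch: random assignment, then compare posttest means.
      import random
      from statistics import mean

      random.seed(42)
      participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical clients
      random.shuffle(participants)
      program_group, control_group = participants[:10], participants[10:]

      # Simulated posttest outcome scores (a program effect of +8 is assumed).
      posttest = {p: random.gauss(70, 5) + (8 if p in program_group else 0)
                  for p in participants}

      program_mean = mean(posttest[p] for p in program_group)
      control_mean = mean(posttest[p] for p in control_group)
      print(f"Program mean: {program_mean:.1f}   Control mean: {control_mean:.1f}")
      # Because assignment was random, a clear difference is hard to attribute to
      # maturation, self-selection, or groups starting at different levels.
      ```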
    • 37. Explanatory Designs
      • Classical experimental
      • Solomon four group
      • Randomized posttest only control group
    • 38. Explanatory Designs
      • Strengths/Limitations:
        • counter threats to internal validity
        • allow interpretations of causation
        • expensive and difficult to implement
        • frequent resistance from practitioners who feel they already know what is best
      •   Suggested Times to Use:
        • when new program is introduced
        • when stakes are high
        • when there is controversy over efficacy
        • when policy change is desired
        • when program demand is high
    • 39. Planning a Program Evaluation
    • 40. Planning a Program Evaluation
      • Engage Stakeholders
      • Focus the Evaluation
      • Collect Data
      • Analyze & Interpret
      • Use the Information
    • 41. Engage Stakeholders
      • Who should be involved?
      • How might they be engaged?
        • Identify & meet with stakeholders – program director, staff, funders/program sponsors and clients/program participants.
    • 42. Focus the Evaluation
      • What are you going to evaluate? (Describe program logic model/theory of change)
      • What is the evaluability of the program?
      • What is the purpose of the evaluation?
      • Who will use the evaluation? How will they use it?
      • What questions will the evaluation seek to answer?
      • What information do you need to answer the questions?
      • When is the evaluation needed?
      • What evaluation design will you use?
    • 43. Collect Data
      • What sources of information will you use?
        • Intended beneficiaries of the program (program participants, artifacts, community indexes)
        • Providers of service (program staff, program records)
        • Observers (expert observers, trained observers, significant others, evaluation staff)
      • What data collection method(s) will you use?
      • When will you collect data for each method you’ve chosen?
    • 44. Analyze & Interpret
      • How will the data be analyzed?
        • Data analysis methods
        • Who is responsible
      • How will the information be interpreted – by whom?
      • What did you learn?
      • What are the limitations?
    • 45. Use the Information
      • How will the evaluation be communicated and shared?
        • To whom?
        • When?
        • Where?
        • How to present?
      • Next steps
    • 46. Ready, Set, Go? Some things to consider…..
    • 47. StreetJibe: Summary of Process and Outcome Evaluation Questions
    • 48. Things to Consider…..
      • Planning an evaluation follows steps similar to those of more basic research, with some additional considerations
      • More effort needs to be expended in engaging and negotiating with stakeholder groups
      • There needs to be a keener awareness of the social/political context of the evaluation (e.g. differing and competing interests)
    • 49. Important to consider…
      • Internal or external evaluators?
      • Scope of evaluation?
        • Boundary
        • Size
        • Duration
        • Complexity
        • Clarity and time span of program objectives
        • Innovativeness
    • 50. Challenging Attitudes toward Program Evaluation…….
      • Expectations of slam-bang effects
      • Assessing program quality is unprofessional
      • Evaluation might inhibit innovation
      • Program will be terminated
      • Information will be misused
      • Qualitative understanding might be lost
      • Evaluation drains resources
      • Loss of program control
      • Evaluation has little impact
