Program evaluation and outdoor education: An overview

This presentation discusses program evaluation in outdoor education. What is it? Why do it? What methods are there? How can data be analysed? How can results be used? We will consider several example program evaluation studies and available tools and resources. There will also be opportunity to workshop your own program evaluation needs.

Main presentation page: http://wilderdom.com/wiki/Neill_2010_Program_evaluation_and_outdoor_education:_An_overview

Slide notes
  • Session abstract
    This session will discuss program evaluation in outdoor education. What is it? Why do it? What methods are there? How can data be analysed? How can results be used? We will consider several example program evaluation studies and available tools and resources. There will also be opportunity to workshop your own program evaluation needs.
    More info: http://wilderdom.com/wiki/Category:NOEC/2010
    Biography
    James currently lectures in the Centre for Applied Psychology at the University of Canberra and conducts research in outdoor education, experiential learning, the natural environment, and new technologies. James previously taught outdoor education at the University of New Hampshire and was the Research Coordinator and a senior instructor at Outward Bound Australia.
    More info: http://wilderdom.com/JamesNeill.htm
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg
    License: Public domain
    Image source: http://commons.wikimedia.org/wiki/File:Autoroute_icone.svg
    License: CC BY 2.5
    Author: http://commons.wikimedia.org/wiki/User:Doodledoo
  • Image name: * Blackboard *
    Image source: http://www.flickr.com/photos/8078381@N03/3279725831/
    Image author: pareeerica, http://www.flickr.com/people/8078381@N03/
    Image license: CC BY 2.0, http://creativecommons.org/licenses/by/2.0/deed.en
  • The necessity argument runs something like this: if we don't start building systematic evaluation of our programs now, we will eventually (and probably increasingly) be forced by outside bodies to do so.
    The moral argument runs something like this: if we purport to affect psychosocial aspects of people's lives, then we have a moral obligation to maximise, in a rigorous and thorough way, the processes and outcomes of our programs.
    (If we teach canoeing badly, but the kids survive, does it matter? But if we teach badly about personal lives, we potentially interfere with and damage core aspects of a person's being. We are therefore at least as morally obligated to be thorough and rigorous in our educational design, our training of facilitators, and our evaluation of outcomes as we are in thoroughly researching appropriate safety procedures.)
    Safety audits are now common, but educational audits are not, although they are becoming more so.
    Brookes' examination of fatalities in Australian outdoor education since the 1960s identified staff as one of the highest fatality risks because they are inclined to go outside program policies and normal behaviours. Likewise, whilst we discipline our students through systematic processes of feedback and reflection, we tend not to subject the educational quality of our programs to the same rigour of analysis.
  • We are all natural researchers and evaluators; for example, we are always assessing other people and judging value.
    If you are asked to buy new harnesses for a ropes course, you naturally "research" the answer to the question.
    But when it comes to assessing the quality of our programming, do we investigate as systematically?
  • Source: Priest, S. (1999). National life cycles in outdoor adventure programming. The Outdoor Network, 10(1), 16-17, 34-35.
    Priest (1999) – figure & table
  • The necessity argument runs something like this:
    - If we don’t get in and start building up a systemic evaluation programs of our programs now, we will eventually (probably) be increasingly forced by outside providers to do so
    The moral argument runs something like this:
    If we purport to affect psychosocial aspects of people’s lives then we have a moral obligation to maximize in a rigorous, thorough way the processes and outcomes of our programs
    (if we teach canoeing badly, but the kids survive, does it matter? But if we teach about personal lives badly, we potentially muck around with and damage the core aspects of a person’s being? Therefore we are as morally obligated if not more so to be as thorough and rigorous in our educational design, our training of facilitators, and our evaluation of outcomes as we are in throughly researching appropriate safety procedures, etc.)
    Safety audits are now common – but educational auditing is not – they are becoming more common
    Noted that in Brookes’ examination of fatalities since the 1960’s in Australian outdoor education, staff were identified as one of the highest risks for fatalities because they are inclined to go outside of program policies and normal behaviors – likewise, whilst we discipline our students through systematic processes of feedback and reflection, we tend not to subject the educational quality of our programs to the same kind of rigor in analysis
  • Progression through the 7 stages can start from any point and need not progress linearly (e.g., a change in leadership can cause a big leap up or down).
  • Other options: grants; minimalist "just do it" – just experiment with an end-of-program survey on your next program (a minimal scoring sketch follows these notes).
  • http://managementhelp.org/evaluatn/fnl_eval.htm#anchor1585345
  • Program evaluation and outdoor education: An overview
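
A minimal sketch of the "just do it" survey option mentioned in the notes above, written in Python. The file name survey.csv and its layout (one row per participant, one 1-5 rating column per item) are illustrative assumptions, not part of the presentation:

    import csv
    from statistics import mean, stdev

    # Assumed layout (hypothetical): one row per participant, one column
    # per survey item, each cell a 1-5 rating (all columns are numeric).
    with open("survey.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Summarise each item: response count, mean rating, and spread.
    for item in rows[0]:
        ratings = [int(r[item]) for r in rows if r[item]]
        print(f"{item}: n={len(ratings)}, mean={mean(ratings):.2f}, "
              f"sd={stdev(ratings):.2f}")

Even a rough per-item summary like this shows which program objectives participants rated highest and lowest, which is often enough to start a conversation about program quality.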

    1. Program evaluation & outdoor education: An overview. Dr James Neill, Centre for Applied Psychology, University of Canberra. 16th National Outdoor Education Conference, Perth, Western Australia, Jan 10-13, 2010
    2. Overview: 1. Discuss program evaluation in outdoor education, e.g.: What is it? Why do it? What methods are there? How can data be analysed? How can results be used?
    3. Overview (cont.): 2. Consider example program evaluation studies and available tools and resources. 3. Opportunity to workshop your own program evaluation needs.
    4. Contacts & resources: http://wilderdom.com/wiki/Evaluation, http://wilderdom.com/wiki/Research. Email: james.neill@canberra.edu.au
    5. Resources
    6. Resources: http://managementhelp.org/evaluatn/fnl_eval.htm
    7. What is it?
    8. What is evaluation? E-value-ation (a systematic process of determining value)
    9. Research vs. evaluation: Research and evaluation are ways of answering questions. Research aims to generalise findings to the outside world; evaluation findings are specific and restricted (more info: Priest, 2001).
    10. Why do it?
    11. Outdoor education life cycle: the role of research & evaluation. As the field matures, there is a trend towards more programs becoming involved in research and evaluation.
    12. International life cycle (Priest, 1999). Source: Priest, S. (1999). National life cycles in outdoor adventure programming. The Outdoor Network, 10(1), 16-17, 34-35.
    13. Organisational use of experiential learning cycle principles
    14. Why evaluate? Two main motivations: NECESSITY (we have to, for others) and MORALITY (we want to, to improve/develop)
    15. Hierarchy of research/evaluation motivations: -1. Intentional disinterest / non-engagement; 0. Denial or non-awareness; 1. Forced / compulsory; 2. Marketing and funding purposes; 3. Improve the quality of the program; 4. Contribute to developing the profession; 5. For the sake of humanity & the cosmos
    16. Types/models of evaluation (Priest, 2001): Needs assessment (what gaps will the program fill?); Feasibility study (given the constraints, can the program succeed?); Process evaluation (how is the implemented program progressing?); Outcome evaluation (were program goals and objectives achieved?); Cost analysis (was the program financially worthwhile or valuable?); Research (will the program work elsewhere?)
    17. Ways of gathering data: questionnaires, surveys, checklists; interviews; documentation review; observation; focus groups; case studies
    18. Models for getting started: Internal (use existing staff resources; advantages: cost-efficient, high level of specific program knowledge); External (consultant, university, another organisation, graduate student, etc.; advantages: independence, expertise); Collaborative funding applications
    19. Example evaluation studies: Outward Bound Australia Colonial Foundation Study (Neill, 2001); Young Endeavour (Berman, Finkelstein, & Powell, 2005); Outdoor Education Group (Learning Journeys); Melbourne Children's Institute Resilience/Mental Health Study (2010-2012)
    20. A typical evaluation process: 1. Define purpose of evaluation; 2. Audience (who needs to know?); 3. Identify stakeholders; 4. Establish program objectives & their operational definitions; 5. Identify data collection methods; 6. Establish research designs; 7. Develop & pilot measures; 8. Collect data; 9. Analyse data; 10. Report/disseminate & get feedback; 11. Consider/act on recommendations. (A pre/post analysis sketch follows this transcript.)
    21. Define purpose of evaluation: What is your motivation (evaluation or research)? Why do you want to evaluate (internal or external reasons)? What do you want to do with the evaluation? What is the research question? Note: it's not research if you can't be surprised by the results.
    22. Audience: who needs to know? Humanity? Local society? Funders? Parents / school community? Principal? Program manager? Instructors? Students?
    23. Identify stakeholders: who has valuable information to help develop a comprehensive picture? Local community? Outdoor education staff? Client organisation staff? Students? Family? Environment?
    24. Program objectives & operational definitions
    25. Ways of gathering data: questionnaires, surveys, checklists; interviews; documentation review; observation; focus groups; case studies
    26. Program objectives & operational definitions (purposes/outcomes and their descriptions):
        Recreational, Physical: leisure (fun, relaxation, enjoyment), physical fitness, outdoor skills training
        Educational: direct (subject knowledge) and indirect (e.g., academic self-concept)
        Developmental: personal and social development, life skills and functionality of behaviour
        Therapeutic, Redirectional: improve dysfunctional personal and group behaviour patterns
        Environmental: environmental attitude, knowledge, and behaviour
    27. Dissemination / reporting forms: academic article; conference presentation; report (technical or non-technical); executive summary; seminar / briefing; news release / popular article; student thesis; website; video
    28. References: Berman, M., Finkelstein, J., & Powell, M. (2005). Tall ships and social capital: A study of their interconnections. International Journal of the Humanities, 2(2). Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24(1), 34-40. Retrieved January 10, 2010, from http://academic.evergreen.edu/curricular/atpsm
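
On the "How can data be analysed?" question (slides 2 and 20), outcome evaluations often compare pre-program and post-program scores and report a standardised effect size. A minimal Python sketch under that assumption; the numbers are made up for illustration, and the presentation does not prescribe this particular analysis:

    from statistics import mean, stdev

    # Hypothetical paired pre/post scores for one outcome scale
    # (eight participants rated on a 1-8 scale; values are invented).
    pre = [4.2, 5.1, 3.8, 6.0, 4.5, 5.3, 4.9, 5.6]
    post = [5.0, 5.4, 4.6, 6.2, 5.5, 5.9, 5.1, 6.3]

    # Per-participant change from pre to post.
    gains = [b - a for a, b in zip(pre, post)]

    # Standardised mean gain: mean change divided by the standard
    # deviation of the pre scores, one common standardiser for
    # pre/post designs.
    d = mean(gains) / stdev(pre)

    print(f"pre mean = {mean(pre):.2f}, post mean = {mean(post):.2f}")
    print(f"mean gain = {mean(gains):.2f}, effect size d = {d:.2f}")

Reporting a standardised effect size rather than raw means alone makes results comparable across programs and with published benchmarks in the field.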
