Published in: Economy & Finance, Education

  1. Learning about Learning – Evaluation of a National Child Protection Training Programme
     Kate Skinner, Institute Lead: Research Development and Application
  2. In this Presentation I will:
     • Tell you about the training programme
     • Describe our evaluation methods
     • Identify what we learned from it
     • Invite your thoughts and views
  3. The Programme:
     • Followed a child death where knowledge of child protection (CP) was found lacking in social work staff who worked with adults
     • Funded by the Scottish Government (formerly the Scottish Executive)
     • Available to social workers from the 32 local authorities in Scotland
  4. The Programme (ii)
     • Aimed to include learning about substance misuse, domestic violence and mental health
     • Comprised 4 days for adult services staff (2 days on CP and 2 days with staff from children’s services on joint working)
     • 2 days for children’s services staff
     • Delivered locally by project staff
  5. The Programme (iii)
     • Accredited for 20 credits (200 hours of study) at SCQF level 9 (3rd year of a 4-year degree), with a written assignment
     • Hard-copy materials not provided for participants, though a virtual learning environment was arranged for participants to access materials
  6. The Evaluation (i)
     • Commissioned via successful competitive tender
     • Funded by the Project
     • Commissioned in 2005 and completed by an independent team from the Universities of Stirling and Kingston in March 2007
  7. The Evaluation (ii)
     • Based on Kirkpatrick’s (1994) four levels of evaluation
     • Used a multi-modal approach
  8. Aims of the Study – to evaluate impact on:
     • Practitioners’ knowledge
     • Intra-agency cooperation
     • Intra-agency communication
     • Initial assessments
     • Ability to identify children at risk of harm
     • Practitioners’ confidence regarding roles and responsibilities
  9. Study Design
     • Classroom observation
     • Scrutiny of programme materials
     • Scrutiny of participants’ feedback
     • Knowledge tests
     • Short vignettes in which participants applied learning
     • Trainers’ views on the programme
     • Scrutiny of assessment grades
     • External Examiner’s reports
     • Participants’ views on changes to practice
     • Survey of managers
     • Interviews with participants
     • Interviews with service users
     • Examination of service users’ files
  10. What does the Literature tell us? (i)
     • Evaluation must be systematic and include transfer of learning in the workplace (Baginsky and MacPherson, 2005; Ogilvie-Whyte, 2006)
     • Collaborative working is difficult (Cooper et al., 2003; Huxham and Vangen, 2005)
     • There is a knowledge base to be learned (Shardlow et al., 2004)
     • Learning needs to connect to what people do (Rogers, 1974; Gardner, 2006)
  11. What does the Literature tell us? (ii)
     • Learning needs systematic preparation and support (Cherniss, 1998; Skinner and Whyte, 2004)
     • Learning is the shared responsibility of commissioners, learners, managers and trainers (Curry et al., 1994)
     • Without the involvement of all of the above, retention and implementation of learning will not occur systematically (Woodhouse and Pengelly, 1991; Fineman, 1997)
  12. Findings (i)
     • Little or no preparation of participants by managers or trainers
     • Participants had very low expectations of the programme as a trigger for practice change
     • Significant differences in delivery between project team members
     • Disappointing changes in level of knowledge
     • Major discrepancies between feedback and transfer of learning
  13. Findings (ii)
     • Some resistance to thorough evaluation of training as a legitimate use of staff time
     • Assessment of learning given very low priority by participants (3% of the whole population)
     • Self-report is limited as a measure of retention of learning
     • Little attention given to retention of learning by staff, managers and trainers
  14. Findings (iii)
     • Intra- and inter-agency communication and collaboration are difficult and require dedicated learning programmes, both to raise their profile and to enable learning of techniques
  15. Concerns (i)
     • Rhetoric of measurement, effectiveness and value for money not backed up in practice
     • Self-reports viewed as sufficient proof of the worth of training
     • Absence of reliable, objective data on impact
  16. Concerns (ii)
     • Suspicion that very little practice change resulted, despite expensive, competent training arrangements
     • Concern that government believes training offers a speedy, reliable and productive response to a practice problem
  17. Questions:
     • Would it be better to do less training and focus more on retention?
     • Are we using research on how people learn?
     • Is it OK to go on a course and not expect to have to change what we do?
     • Do we need to do more evaluation of this type to understand more about what kind of learning we should be offering?