Myscience manages the national network of Science Learning Centres and the National STEM Centre in the UK to support STEM education. Effective evaluation of continuing professional development (CPD) programmes requires assessing impact on both teacher development and pupil outcomes over time. Research shows CPD is more effective when it is collaborative, sustained over time, supported by specialist expertise and explores evidence from practice; studies cited here associate an average of 49 hours of CPD per year with substantial gains in student achievement. Evaluations should assess planning, implementation and outcomes in order to improve CPD practice.
3A - Effective evaluation of the impact of CPD - What does research tell us
Myscience is an initiative of the White Rose University Consortium (comprising the Universities of Leeds, Sheffield and York) and Sheffield Hallam University. Myscience manages the national network of Science Learning Centres, the National STEM Centre and other programmes supporting STEM education.

Registered Office:
Myscience.co Limited, University of York, Heslington, York, North Yorkshire, YO10 5DD
Tel +44 (0) 1904 328300 Fax +44 (0) 1904 328328 Email enquiries@national.slcs.ac.uk
Websites www.sciencelearningcentres.org.uk and www.nationalstemcentre.org.uk
Myscience.co Limited, registered in England and Wales, Company Number 05081097.
Effective evaluation of the impact of CPD: What does research tell us?
Background
Over the past ten years, the evidence for the critical role of the teacher in pupil outcomes has been supported by a number of systematic reviews (Curee, 2012; Schleicher, 2011).
The central question is:
“How do we determine the effects and effectiveness of activities designed to enhance the professional
knowledge and skills of educators so that they might in turn improve the learning of students?”
Guskey (2000)
Well-designed evaluation serves multiple purposes. It tells us about the quality of current practice as
well as informing us about how to develop this practice (Guskey, 2000).
Historically, evaluation has been seen as costly and time-consuming, and often left to ‘experts’ who are called in at the end and asked to determine whether what has been done has made a difference. As a result, much evaluation of CPD programmes has focused on collecting ‘soft’ evidence of teachers’ reactions to the CPD, with limited regard to gathering more rigorous evidence on either teacher development or the impact on pupils’ outcomes.
This paper is concerned with identifying how to evaluate the impact of CPD on teachers’ development
and pupil outcomes and the implications for the provision of CPD to enable this to occur.
What is the research evidence on effective professional development?
We are starting from the premise that effective CPD should have an impact on both teacher development and pupil learning, so what can research tell us about what needs to be included in CPD to support this process?
The CUREE report (2012) draws on published research and other evidence to address the question
of: “what are the characteristics of high quality professional learning for practitioners in education?”
The main focus of the report is the features of professional learning, for teachers and their leaders,
which lead to benefits for their pupils and students. They conclude that CPD for teachers is more
likely to benefit students if it is:
collaborative – involves staff working together, identifying starting points, sharing evidence
about practice and trying out new approaches.
supported by specialist expertise – usually drawn from beyond the learning setting.
focused on aspirations for students – which provides the moral imperative and shared
focus.
sustained over time – professional development sustained over weeks or months had
substantially more impact on practice benefiting students than shorter engagement.
exploring evidence from trying new things – to connect practice to theory, enabling practitioners to transfer new approaches and practices, and the concepts underpinning them, to multiple contexts.
The CPD approaches which demonstrated the characteristics linked to effectiveness included:
collaborative enquiry – peer-supported, collaborative, evidence based learning activities
taking place over an extended period coupled with risk taking and structured professional
dialogue about evidence.
coaching and mentoring – a vehicle for contextualising CPD and for embedding enquiry-oriented learning in day to day practice.
networks – collaborations within and between schools depending upon and propelled to
success by CPD.
structured dialogue and group work – practised in pairs and small groups, providing
multiple opportunities for exploring beliefs and assumptions, trying out new approaches and
giving and receiving structured feedback.
However, they found that, in the main, CPD practice in England does not reflect these characteristics. For example, despite the strength of evidence about the effectiveness of coaching, Ofsted found approaches to coaching were generally very weak (Ofsted, 2006). Additionally, Curee (2012) reported that, across a number of international and UK based studies which expand, build understanding of, or reinforce the knowledge base about effective CPD that impacts on student outcomes, an average of 49 hours spent on staff CPD over a year boosted student achievement by 21 percentile points, whereas more limited time (5 to 14 hours) showed no statistically significant effect on student learning.
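To put this figure in perspective, here is an illustrative calculation based on the numbers above (an inference, not a result reported by Curee): a gain of 21 percentile points moves the average pupil from the 50th to the 71st percentile and, assuming normally distributed outcomes, corresponds to an effect size of

    d = \Phi^{-1}(0.71) \approx 0.55

where \Phi^{-1} is the inverse of the standard normal cumulative distribution function.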
Guskey (2000) developed imperatives for improving professional development: the CPD needs to be an ongoing process that is necessary to support change; include a series of extended, job-embedded learning experiences to provide educators with the opportunity to discuss, think about, try out and develop new practices in an environment that values enquiry and experiment; and be part of a wider intentional process to make positive changes and improvements, rather than being mainly for those professionals doing an inadequate job.
The recent TALIS (2013) survey reports that teachers across 36 European countries indicated that CPD in different topics has had an impact on their teaching. The proportions reporting a moderate or large positive impact of professional development were:

66% for CPD about subject knowledge (50% in England)
59% for CPD about pedagogy of the subject fields (45% in England)
47% for CPD focusing on student evaluation and assessment practices (50% in England)
47% for CPD on knowledge of the curriculum (42% in England)
44% for CPD addressing ICT skills for teaching (25% in England)

The survey does not give details of exactly how the CPD had impacted on their teaching.
What is effective practice in evaluating the impact of professional development?
Guskey (2000) describes three different types of evaluation:
1. Planning evaluation. Checking that the CPD is fit for purpose, has clearly identified goals and outcomes, and that these are achievable and situated in the existing research.
2. Formative evaluation. The purpose of this type of evaluation is to provide ongoing evidence
of whether expected progress is being made by considering intermediate benchmarks of
success to determine what is working as expected and what difficulties must be overcome.
3. Summative evaluation. This evaluation is conducted at the completion of a programme or
activity to make judgements about its overall success. It describes what was accomplished,
what were the consequences (positive and negative), what were the final results (intended and
unintended), and whether benefits justified the costs. Unlike formative evaluation, which is
used to guide improvements, summative evaluations provide decision-makers with the
information they need to make crucial decisions about the long term future of a programme or
activity.
These different phases of evaluation require different data to be collected using a variety of methods. Test scores, for example, are often used in summative evaluations, while interviews and survey data
may be used more often to guide formative evaluation (Goodall et al, 2005).
Burns (2007) acknowledges that educational research has been weak in its ability to continuously develop and refine a body of knowledge that is accepted as valid and reliable. The evaluation of professional development therefore needs to identify which CPD practices are most likely to impact on pupil outcomes, and this needs to guide reforms in CPD specifically and in educational programmes in general.
In England, accountability measures for schools and colleges require them to show the impact of their
actions. For example, the revised Ofsted Framework (September 2013) requires schools to show the
impact of professional development on the quality of teaching overall and also on specific teachers.
To achieve an ‘outstanding judgement’ on leadership and management, schools need to focus
relentlessly on improving teaching and learning and provide focused professional development for all
staff which has a positive impact on pupil outcomes and teacher development. To do this effectively,
senior leaders need to collect evidence of the impact of the professional development over a period of
time using tools such as staff and pupil voice, lesson observations, recording and sharing reflections
as part of mentoring and coaching, and work scrutiny. Effective analysis of data, both formative and
summative, can be used to monitor impact on short, medium and long term pupil outcomes.
How can we improve the impact of evaluations on CPD?
Guskey outlines three key reasons why evaluations may not be as effective as they could be:
1. A focus on documentation rather than evaluation. Many evaluations consist merely of
summarising the activities undertaken as part of the CPD programme.
2. The evaluation is too shallow and does not address meaningfully the indicators of success.
Where some evaluation does exist, this often takes the form of participation satisfaction
questionnaires.
3. Evaluations are typically brief one-off events, often not planned for, that take place after the
event. As most meaningful change will be long term the evaluation of CPD should reflect this.
One key barrier to evaluation in education may be the perceived need to demonstrate a causal
relationship between professional development and improvements in pupil outcomes. Most schools
are engaged in systematic reform involving the simultaneous implementation of multiple innovations
(Fullan, 1992). Unlike research and evaluation in healthcare or other fields, the ability to control other
factors is limited. In most cases there are too many intervening variables to allow for simple causal
inferences (Guskey, 2000). However, it is possible to collect a strong body of evidence about whether
or not CPD is contributing to specific gains in pupil outcomes. By clarifying goals and identifying the
desired impact, collecting the necessary evidence becomes easier. Evaluation is often difficult because the collection of evidence of impact is not planned for or gathered early enough.
“Always seek proof but collect lots of evidence along the way” (Guskey, 2000)
To ensure the systematic gathering of evidence Guskey suggests a clear set of evaluation guidelines:
Planning guidelines
o Clarify intended goals.
o Assess the value of the goals.
o Analyse the context and assess how it might influence implementation.
o Estimate the programme’s potential to meet the desired goals. Include a thorough cost-benefit analysis.
o Determine how the goals can be assessed. Decide, upfront, what evidence you trust.
o Outline strategies for gathering evidence, including critical intermediate indicators that
might be used to identify problems or forecast final results.
Formative and summative guidelines
o Gather and analyse evidence on all five of the levels described by Guskey (see below)
as well as an ongoing cost-benefit analysis.
o Prepare interim and final reports that are clear, meaningful, and comprehensible to
those who will use the results.
Teachers often report that CPD improves their teaching and has an impact on pupils, but are
unable to provide real evidence of this impact. Planning for collecting evidence of impact is an
important part of effective evaluation and it is crucial to good CPD. Guskey (2000) proposes a
model of evaluation for CPD using five different levels:
Level 1: Participants’ reactions
Level 1 involves asking participants for their immediate reaction to the CPD.
Level 2: Participants’ learning from CPD
Level 2 ascertains the type and amount of the participants’ learning which has taken place –
this could be cognitive, affective or behavioural.
Level 3: Organisational support and change
Level 3 is about the impact the CPD has had on participants’ colleagues and institutions.
Organisations can help share and embed learning from CPD which can make it more
sustainable and have a motivational impact as well.
Level 4: Participants’ use of new knowledge and skills
Level 4 evaluates the impact that CPD has had over time (6 to 9 months) on the participants’
use of the new knowledge and skills and how this has impacted on classroom practice.
Evaluation at this level takes place over time, the length of which depends on the complexity
of the knowledge or skill to be acquired and the time participants have to develop their skills.
Level 5: Student outcomes
Level 5 is about the impact on pupils. Again this level of evaluation will take time but it is
important to ascertain how the teachers’ new knowledge or practices have improved pupils’
attainment and progress.
Guskey recommends working backwards, embedding pupil outcomes into planning the CPD activity
and its evaluation. This applies to all CPD, whether provided within schools and colleges or externally, to ensure that the final goal of improving pupil outcomes is central to the process.
Goodall et al (2005) suggest that, in addition to the levels described by Guskey, the cost-effectiveness of CPD should be part of its evaluation: CPD should not be undertaken if the cost to the system outweighs the benefits. The use of effect sizes in the meta-analysis undertaken by Hattie (2009) to assess the effectiveness of educational interventions on pupil outcomes is an example of cost-benefit analysis in education. However, what we know about the cost-effectiveness of CPD is still fairly limited, and it is not part of most ongoing evaluation processes.
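For reference, the standard formula behind such effect sizes is Cohen's d (Hattie's synthesis relies on measures of this kind, though individual studies vary in the details):

    d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}

so an effect size of, say, 0.4 means the average pupil receiving the intervention scored 0.4 pooled standard deviations above the average pupil who did not.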
What does the National Science Learning Network have to offer?
The purpose of any teacher CPD, particularly the subject specific CPD provided by the National
Science Learning Network, is to ensure a positive change in pupils’ outcomes, yet, as has been stated, the evaluative evidence to support this relationship is often weak or missing. The assessment often
relies on qualitative data of teachers’ opinions rather than any data related to pupil progress. Based
on the CPD programme for science teachers and support staff at the National Science Learning
Centre, Kudenko and Hoyle (2013) investigated how embedding particular strategies into CPD
increased the reported impact on students’ achievement which was backed up by ‘hard’ and rigorous
evidence.
Using the Guskey five levels of evaluation, the research analysed data from sources including impact
questionnaires, which teachers completed two or more months after each residential CPD session,
quantitative and qualitative accounts of teachers’ post-CPD actions and the evidence of outcomes
and impacts. Teachers tended to use two contrasting narratives to depict their post-CPD experience:
Narrative 1 was pupil-focused, structured and contained measurable evidence, while Narrative 2 described what teachers did and how, and what they thought about the CPD experience. It was found that teachers used structured, measurable outcomes (Narrative 1) more
frequently when their CPD had contained specific training in ‘reflective practices’ and ‘action research’
methodologies. By including reflective practices in the CPD, teachers were more able to plan their
post-CPD actions and to collect evidence of pupil outcomes. This boosted their confidence and
motivation to continue to innovate, increased the sustainability of the changes from the CPD and
increased the dissemination to colleagues in school and beyond. This finding is consistent with the
studies that Curee (2012) noted on the value of ‘action research’ in CPD.
The National Science Learning Network has used this research to further develop their process of
evaluating CPD based on the Guskey (2000) model of evaluation.
The process map below outlines how the evaluation process is used in the Network’s ‘Impact toolkit’
to provide evidence of the impact of CPD over time. Documentation to support each step is available
as part of the Impact toolkit.
Essential to the CPD process is planning clear aims, objectives and outcomes, which are shared with participants prior to the CPD. Participants are given an opportunity to define their own learning objectives (Form B1), which helps tutors to tailor the CPD to their needs. During the CPD
participants are encouraged to record their learning on Form D1 and to agree actions or interventions
they are going to make as a result of the CPD. Participants are also encouraged to collect evidence of
their own knowledge, understanding or practice level as well as that of their colleagues and their
pupils. They identify their post-CPD actions or intervention and determine how they will collect
evidence of the impact of that intervention on themselves, their colleagues and their pupils (Form D2).
At the end of the CPD (or a series of CPD activities amounting to 0.5 day or more) participants
evaluate their immediate reaction (level 1 of Guskey’s model) through Form D3. About 6 to 8 weeks
after the CPD, participants are asked about their learning (Guskey’s level 2) using Form P1 and about
6 to 9 months after the CPD they are asked about the use of their learning (Guskey’s level 4) using
Form P2. The impact on colleagues (Guskey’s level 3) is evaluated through collecting evidence on
Forms P1 and P2, and through a longer term evaluation at the end of the academic year (Form P3), which also
includes evidence of impact on pupil outcomes.
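In summary, the toolkit’s evaluation points map onto Guskey’s levels as follows (a compilation of the process described above; the form names are those given in the Impact toolkit):

Guskey level                              Form(s)      Timing
1 Participants’ reactions                 D3           At the end of the CPD
2 Participants’ learning                  P1           About 6 to 8 weeks after the CPD
3 Organisational support and change       P1, P2, P3   Ongoing, and at the end of the academic year
4 Use of new knowledge and skills         P2           About 6 to 9 months after the CPD
5 Student outcomes                        P3           At the end of the academic year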
The outcomes of this evaluation of impact process and the evaluation tools referred to on the process map can be found at: https://www.sciencelearningcentres.org.uk/impact-and-research/impact/impact-toolkit/ and https://www.sciencelearningcentres.org.uk/impact-and-research/impact/impact-toolkit-2013/
Discussion Points
How can we encourage colleagues to have a positive, if critical, mindset about changing their attitudes and practices in evaluating the impact of CPD?
How can Myscience consultants plan for the effective evaluation of the CPD they provide in schools or colleges?
What are the challenges of supporting teachers to collect evidence of impact, particularly on student outcomes, as part of CPD?
What adaptations might we need to evaluate different types of CPD, eg online, in-class support etc?
How can Myscience consultants support schools to collect evidence of the impact of the CPD they are providing for schools or colleges?
How does the need to evaluate the impact of professional development over time support the changing educational landscape in the UK?
References
Advisory Committee on Mathematics Education (ACME) (2007) Empowering teachers: success for learners. http://www.acme-uk.org/media/14054/acmepdreport2013.pdf
Burns, T., Schuller, T. (2007) Evidence in Education: Linking Research and Policy. OECD. http://www.oecd.org/edu/ceri/47435459.pdf
Cordingley, P. (2000) Teacher Perspectives on the Accessibility and Usability of Research Outputs. Paper presented at the British Educational Research Association Annual Meeting.
Curee (2012) Understanding What Enables High Quality Professional Learning: A report on the research evidence. http://www.curee.co.uk/files/publication/[site-timestamp]/CUREE-Report.pdf
Fullan, M., Hargreaves, A. (1992) Teacher Development and Educational Change. Falmer Press.
Goodall, J., Day, C., Lindsay, G., Muijs, D., Harris, A. (2005) Evaluating the Impact of CPD. University of Warwick. http://www2.warwick.ac.uk/fac/soc/cedar/projects/completed05/contprofdev/cpdfinalreport05.pdf
Guskey, T.R. (2000) Evaluating Professional Development. Thousand Oaks, CA: Corwin Press.
Hattie, J. (2009) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Routledge.
Joyce, B., Showers, B. (2002) Student Achievement through Staff Development. ASCD.
Kudenko, I., Hoyle, P. (2013) How to Enhance CPD to Maximise the Measurable Impact on Students’ Achievement in Science. ESERA conference and publication 2013.
Ofsted (2006) The Logical Chain: Continuing Professional Development in Effective Schools. Last accessed at http://www.ofsted.gov.uk/resources/logical-chain-continuing-professional-development-effective-schools on 25/09/2014.
Schleicher, A. (2011) Building a High Quality Teaching Profession: Lessons from Around the World. OECD.
TALIS (2013) Teaching and Learning International Survey 2013: Main Findings from the Survey and Implications for Education and Training Policies in Europe. http://ec.europa.eu/education/library/reports/2014/talis_en.pdf