This document discusses evaluating coursework from a student perspective. It outlines common purposes of evaluation, such as quality assurance and improving teaching, and argues that evaluation should focus on both teaching (the teacher and the teaching process) and learning (outcomes and the learning process). Strategies covered include surveys, observations, interviews, and pre-/post-testing; drawbacks include low response rates and students not always assessing teaching accurately. The document advocates combining quantitative and qualitative methods, such as those in Kirkpatrick's four-level model, to gain a holistic view of a course's effectiveness and of opportunities for improvement from both a student learning and a teaching practice perspective.
The purpose of Kirkpatrick's evaluation is to determine the effectiveness of a training program. According to this model, evaluation should always begin with level one and then, as time and budget allow, move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation.
The purpose of Brinkerhoff's Success Case Method (SCM) is to prove and to improve impact. It is a cost-effective way of determining which components of an initiative are working and which are not, and of reporting results in a way that organizational leaders can easily understand and believe.
This presentation gives a fundamental understanding of Kirkpatrick's four-level evaluation model. It also includes a brief overview of the fifth level of evaluation added by Phillips, which forms the Kirkpatrick-Phillips model.
Measuring the Impact of eLearning: Turning Kirkpatrick's Four Levels of Evalu... – Lambda Solutions
Access the webinar recording here: http://go.lambdasolutions.net/webinar-growing-trend-of-open-source-learning
Whether it's to inform, to improve, to change, or a combination of these, training must have measurable outcomes that contribute to larger organizational goals. Good training evaluation techniques identify and measure the impact of learning on job performance and, ultimately, on organization-wide business results. When it comes to measuring eLearning, Donald Kirkpatrick's Four-Level Evaluation model is one of the most widely used and respected worldwide.
Co-hosted by Paula Yunker, who has 30+ years of instructional design experience and certification in Kirkpatrick's Four Levels of Evaluation, this webinar will explore why learning evaluation is an important component of any training program and how you can measure the application of learning beyond the learning event itself. We'll discuss how to implement learning evaluation that is practical and provides value but isn't complicated, time-consuming, or expensive. Paula will also share her favorite learning evaluation resources after the webinar!
Check out the slides to learn more about:
- Why learning evaluation is critical for business results
- Kirkpatrick’s four levels of evaluation explained
- Aligning learning to organizational goals
- Typical challenges implementing evaluation in an organization
- Practical strategies for implementing learning evaluation
- Our favorite learning evaluation resources
Most support functions in an organisation fail to justify their return on investment.
Here is the solution you have been looking for.
Please note: this method can be applied not only by the training function but by other support functions as well.
Kirkpatrick's Four-Level Training Evaluation Model – Maram Barqawi
Donald Kirkpatrick, Professor Emeritus at the University of Wisconsin and past president of the American Society for Training and Development (ASTD), first published his Four-Level Training Evaluation Model in 1959, in the US Training and Development Journal.
The model was then updated in 1975, and again in 1994, when he published his best-known work, "Evaluating Training Programs."
It is a four-level training evaluation model that helps trainers measure the effectiveness of their training in an objective way. Kirkpatrick's model is a worldwide standard for evaluating the effectiveness of training.
Curriculum Evaluation is the process of collecting data on a programme to determine its value or worth with the aim of deciding whether to adopt, reject, or revise the programme.
Process of Assessment – B.Ed syllabus, assessment for learning – Maitreyee Biswas
This presentation gives a brief description of how various assessment processes are carried out in teaching and learning, focusing on the methods and strategies used.
What is good assessment? It should be fair, reliable, and reproducible; it should give learners a good opportunity to demonstrate their learning, and it should also dissuade them from plagiarism.
Ann Wilson presents a strategy for developing good assessment across a course or programme, identifies the assessment strategies used in courses, and highlights opportunities for improvement. By the end of the session you will be able to identify the components of a good assessment strategy and will have some useful ideas for improving your own assessments.
This presentation is about didactic assessment: its definitions, related concepts, types, and tools.
Similar to Interrogating evaluation 2015 induction
BARRIERS TO BL & AI ADOPTION IN AFRICA 14092023 RITA KIZITOB.pdf – Rita Ndagire Kizito
In a world of rapid technological change, how can we ensure that the benefits of digital education are accessible to everyone?
Are we ready to embrace these changes in African higher education? This presentation explores the role of diversity and inclusivity in shaping the future of digital learning in Africa.
An introduction to Research Approaches in Higher Education for new or existing university teachers or academics interested in using research to inform their teaching.
In this presentation we interrogate the meaning of the term "scholarship" in the "Scholarship of Teaching and Learning" (SoTL). This is part of a process of conceptualising SoTL from its early introduction to its adoption within the South African higher education context.
Teaching and Learning beyond the pandemic RNKizito 30092022.pptx – Rita Ndagire Kizito
Post-pandemic, existing higher education practice is going to require re-organisation if we are to build lasting practices for future generations.
Seeking identity as scholars in the digital age has become blurred. How does one stay relevant when the road is paved with digital contortions, artefacts, and tools? Are we scholars? Academics? Academic scholars or digital scholars?
In a world where efficiency is superseding effectiveness, this presentation for early career academics introduces the concept of digital scholarship through a Scholarship of Teaching and Learning lens.
Leveraging data to improve feedback processes: what counts in the journey fro... – Rita Ndagire Kizito
A team presentation at Bluenotes Virtual in which we introduce a data support system that uses a Wits application and Blue (Explorance) to input and analyse course and teaching evaluation data. We then sketch the journey and give an account of the challenges encountered and how we are trying to address them.
Re-imagining higher education practice at Nelson Mandela Metropolitan University (NMMU): developing a strategy to transform STEM undergraduate teaching.
Developing an educational philosophy statement or rationale during the design of a Postgraduate Diploma in Higher Education practice at the Nelson Mandela Metropolitan University.
Reflective tasks and their role in changing practice 13092016 – Rita Ndagire Kizito
An introspective study examining the critical relationship between reflective tasks and their role in changing academic staff perspectives and practices through an analysis of participant responses to a Scholarship of Teaching and Learning (SoTL) certificate programme at the Nelson Mandela Metropolitan University (South Africa).
An introduction to a course design process - Carpe Diem - at the Nelson Mandela Metropolitan University based on the work of Gilly Salmon and Ale Armellini
2. Aim of this part of the session
• Interrogate 'evaluation' from your own perspective
• Come up with an evaluation strategy
3. Common purpose of evaluation
• Required by universities near the end of each course
• Can be used by administration/departments as an important element in making decisions about promotion
• Could be a source of great pride or trepidation
(Cashin, 1999; Clayson, 2009)
4. Purposes of evaluation
• Audit – appraising teachers ("quality assurance")
• Development – developing/improving courses and teaching effectiveness ("quality enhancement")
(Biggs, 2003; Edström, 2008; Patton, 1997)
6. What should the focus of the evaluation be?
• To check that something is working
– What is that something? (e.g. course, degree programme, activity)
– What do we mean by 'working'? (what goal should be achieved?)
• To figure out how 'it' can be improved
7. What should the focus of the evaluation be?
• Teaching (teacher)
• Learning (outcomes)
• Teaching (process)
• Learning (process)
"Rather than ratings, teachers should be asked to include their course analyses in their teaching portfolio in order to show their ability to both analyze the student learning experience and the quality of the student learning outcomes, and to improve these with adequate course development measures" (Edström, 2008, p. 104).
8. What should the purpose of evaluation be?
"Evaluation is often viewed as a test of effectiveness – of materials, teaching methods or whatnot – but this is the least important aspect of it. The most important is to provide intelligence on how to improve these things" (Bruner, 1966, as cited in Ramsden, 2003, p. 233).
9. Typical questions
• Have student attitudes changed?
• Have the lecturer's approaches changed?
• Have students learnt something in class?
• What is happening in class?
• Have classroom practices changed?
• Are students engaged?
• Am I meeting students' needs?
10. Evaluation strategies – student attitudes & perceptions
Have student attitudes changed?
• Likert-scale questions & surveys (a summary sketch follows below)
• Could be used to measure the effect of a course, a degree, or a long-term change over a number of years
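As a hedged illustration of the survey bullet above: a minimal Python sketch of how Likert-scale responses (coded 1 = strongly disagree to 5 = strongly agree) might be summarised before and after a course. The items and response data are invented for illustration and do not come from the presentation.

```python
# Minimal sketch: summarising Likert-scale attitude items before and after
# a course. Items and response data are invented for illustration.
from statistics import mean

pre = {
    "I enjoy the subject": [2, 3, 3, 2, 4],
    "I feel confident in the subject": [2, 2, 3, 3, 2],
}
post = {
    "I enjoy the subject": [4, 4, 3, 5, 4],
    "I feel confident in the subject": [3, 4, 4, 3, 4],
}

for item in pre:
    shift = mean(post[item]) - mean(pre[item])
    print(f"{item}: pre {mean(pre[item]):.2f} -> post {mean(post[item]):.2f} ({shift:+.2f})")
```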
11. Evaluation strategies – lecturer approaches
Have lecturer approaches to teaching changed?
• Teaching Practices Inventories
• Can help departments reflect on teaching, by allowing comparisons between different courses/departments
12. Evaluation strategies – student learning
Have students learned anything?
• Measure student performance
• Important to standardise testing (using locally developed or already developed tools)
• Important to conduct pre- and post-testing, before and after a learning intervention (a worked sketch follows below)
Is there evidence that the Graduate Attributes have been adopted by the students?
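One common way to summarise pre- and post-test results, not named on the slide but widely used with standardised instruments, is Hake's normalised gain, g = (post − pre) / (100 − pre). A minimal sketch, with invented percentage scores:

```python
# Minimal sketch: Hake's normalised gain g = (post - pre) / (100 - pre).
# Scores are percentages; the sample data is invented for illustration.
def normalised_gain(pre: float, post: float) -> float:
    if pre >= 100:  # already at ceiling; gain is undefined
        raise ValueError("pre-test score at ceiling")
    return (post - pre) / (100 - pre)

scores = [(40, 70), (55, 80), (30, 45)]  # (pre, post) per student
gains = [normalised_gain(p, q) for p, q in scores]
print(f"mean normalised gain: {sum(gains) / len(gains):.2f}")
```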
13. Evaluation strategies – classroom practices
What is going on in the class?
• Video recordings
• Peer observations (fellow lecturers) using observation protocols
• This data can be used in your Teaching Portfolio
Have classroom practices changed?
• Use of innovative teaching & learning practices
• Providing useful feedback, etc.
14. Evaluation strategies – student engagement
Are students engaged?
• Observation of about 10 students to see if they are engaged, by recording student activity (a tallying sketch follows below)
• What teaching & learning activities result in high levels of student engagement?
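As a hedged illustration of interval-based observation: a minimal Python sketch that tallies, for each observation interval, how many of the watched students appeared engaged. The protocol details and data are invented for illustration, not taken from the slides.

```python
# Minimal sketch: interval-based engagement observation. Each row records
# whether each of ~10 observed students looked engaged during one interval.
# The observation protocol and data are invented for illustration.
intervals = [
    [True, True, False, True, True, True, False, True, True, True],
    [True, False, False, True, True, False, False, True, True, False],
    [True, True, True, True, True, True, True, True, False, True],
]

for i, row in enumerate(intervals, start=1):
    print(f"interval {i}: {sum(row)}/{len(row)} students engaged")

overall = sum(sum(row) for row in intervals) / sum(len(row) for row in intervals)
print(f"overall engagement: {overall:.0%}")
```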
15. Evaluation strategies – student learning needs
Are students' learning needs being met?
• Focus groups or interviews
• Anonymous surveys/mid-term surveys (online or on paper) – letter to the facilitator
• Assignment-by-assignment survey (time it took, approach, resources used, etc.)
• Keep, Start, Stop (a tallying sketch follows below)
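A minimal sketch of how "Keep, Start, Stop" feedback might be tallied into themes. The comments are invented; in practice they would come from an anonymous mid-term survey.

```python
# Minimal sketch: tallying "Keep, Start, Stop" feedback by theme.
# The categorised comments are invented for illustration.
from collections import Counter, defaultdict

responses = [
    ("keep", "worked examples in lectures"),
    ("keep", "worked examples in lectures"),
    ("start", "posting slides before class"),
    ("stop", "unannounced quizzes"),
]

themes = defaultdict(Counter)
for category, comment in responses:
    themes[category][comment] += 1

for category in ("keep", "start", "stop"):
    print(category.upper())
    for comment, count in themes[category].most_common():
        print(f"  {count}x {comment}")
```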
16. Kirkpatrick's (1994) four-level model
Level 1 – Reaction: changes in perception and satisfaction levels; how students feel about the learning experience. Examples: feedback forms.
Level 2 – Learning: changes in knowledge, skills, and attitudes; increase in student knowledge and skills. Examples: informal/formal assessment before and after learning interventions.
Level 3 – Behaviour: changes in behaviour/practices; how far learning is applied in practice, resulting in personal changes. Examples: observations of and interviews with students over time.
Level 4 – Results: noticeable changes in results or conditions; how far the module/course impacts on programme or institutional factors (student performance, retention, throughput). Examples: use institutional data to identify whether the programme/module shifts the nature of student participation, performance, or engagement; student surveys.
17. Some drawbacks
• Students are not always good at evaluating teaching effectiveness; popularity is usually mistaken for good teaching
• Response rate is always less than 100%
• Susceptible to bias – voluntary participation/polarisation
• Questions are usually general, missing the finer details of practice
18. The UWC Evaluation Guideline document
Instruments: generic evaluation, discipline-specific evaluation, question bank.
• A standard set of questions that students are invited to answer anonymously at the end of each course (often, the results are given weight in promotion and tenure decisions).
• Run centrally and completed by all students, to provide the institution with data on how students are performing with regard to institutional strategies and plans.
20. References
• Biggs, J. (2003). Teaching for quality learning at university: What the student does. Buckingham, UK: SRHE and Open University Press.
• Cashin, W. E. (1999). Student ratings of teaching: Uses and misuses. In P. Seldin & Associates (Eds.), Changing practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions (pp. 25-44). Bolton, MA: Anker Publishing.
• Clayson, D. E. (2009). Student evaluations of teaching: Are they related to what students learn? A meta-analysis and review of the literature. Journal of Marketing Education, 31(1), 16-30.
• Edström, K. (2008). Doing course evaluation as if learning matters most. Higher Education Research and Development, 27(2), 95-106.
• Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
• Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: RoutledgeFalmer.
• Schimpf, N. (2015). Evidence-based active learning: Evaluating classroom practices. Presentation made at the University of the Western Cape.