Assessing with confidence
Jon Rosewell, The Open University
Confidence-based marking (CBM) is an assessment method which asks the student not only to provide the answer to a question, but also to report their level of confidence (or certainty) in the correctness of their answer. They need to consider this carefully because it affects the marks they are awarded: a student scores full marks for knowing that they know the correct answer, gains some credit for a tentative correct answer, but is penalised if they believe they know the answer but get it wrong. There are several motivations for using CBM: it rewards care and effort, so engendering greater engagement; it encourages reflective learning; and it promises accuracy and reliability.
CBM has had niche success in the past in the context of medical training and recently may have found a new niche in the context of regulatory compliance; these are both areas where assessment of competency and mastery is expected. However, CBM has not been widely adopted in other areas of education.
In this talk I will review the CBM landscape and ask why CBM is not used more widely. What are the benefits claimed and how robust is the evidence? How should CBM be presented to the students? Do they need training to understand how the system works? Is it a fair method of assessment? Does it disadvantage any category of student? How does it fit with ideas around ‘assessment for learning’ and ‘reflective learning’?
Confidence-based marking could offer both the student and teacher greater insight into a student’s understanding than the standard fare of e-assessment, the multiple-choice quiz. It is a technique that we should therefore keep under consideration.
Opening up multiple choice - assessing with confidence. Jon Rosewell
This presentation introduces a new online question style, Open CBM (Certainty/Confidence Based Marking).
This achieves an open style of question (similar to a free-text or numeric question) where the student doesn't pick from possible answers, but retains the robust and easy implementation of a multiple choice (MCQ) question.
It achieves this by appropriating the technique of certainty/confidence-based marking (CBM). In CBM, a student both selects an answer and also their level of confidence: they score full marks for knowing that they know the correct answer, some credit for a tentative correct answer but are penalised if they believe they know the answer but get it wrong.
An Open CBM question is presented in two stages. Initially, the question is presented with no answer options visible; instead the student must set their confidence level that they know the answer. Only then are the possible answers revealed and the student answers as a normal MCQ. The marking scheme follows standard CBM practice. Mechanically the question remains a simple MCQ: answer matching is trivial and robust, questions are easy to implement, and existing question banks can be reused. However, to the student, the question is effectively transformed from closed MCQ to an open question. They need to formulate an answer before they can decide their confidence in it, so they must decide their answer in the absence of any positive or negative clues from the options, reducing the chance of acquiring misconceptions or of working backwards from the options.
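The two-stage flow just described can be sketched in a few lines. The function and callback names below are my own illustration, not from the presentation, and the sketch assumes the standard CBM mark scheme (1/2/3 if correct at low/medium/high confidence; 0/-2/-6 if wrong):

```python
# Two-stage Open CBM question flow (illustrative sketch, not a real quiz engine).
# Standard CBM marks: (score if correct, score if wrong) per confidence level.
CBM_MARKS = {"low": (1, 0), "medium": (2, -2), "high": (3, -6)}

def open_cbm_question(stem, options, correct_index, ask):
    """Run one Open CBM question.

    ask(prompt, choices) is a callback (e.g. a UI dialog) returning the
    student's selection from choices.
    Stage 1: only the stem is shown; the student commits to a confidence.
    Stage 2: the options are revealed and answered as a normal MCQ.
    """
    confidence = ask(
        stem + "\nHow confident are you that you know the answer?",
        ["low", "medium", "high"],
    )
    choice = ask(stem, options)  # answer options are revealed only now
    right, wrong = CBM_MARKS[confidence]
    return right if options.index(choice) == correct_index else wrong
```

For example, a student who declares high confidence and then picks the correct option scores 3; the same answer at medium confidence scores only 2, and a wrong answer at high confidence scores -6.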
As befits its title, Technologies in practice (TM129) takes a practical focus to learning, with up to 50% of study time having a practical aspect. The tutorial program should support this and in the past some tutors have found innovative ways of bringing practical demonstrations or exercises into their face-to-face sessions, for example demonstrating a robot vacuum cleaner or setting up an ad-hoc network of students’ laptops.
Producing online tutorials with an equivalent practical focus is a challenge. For TM129 we have developed a set of labcasts which deliver practical-focused synchronous tutorial events to all students, with one demonstration for each of the three blocks of the course: Robotics, networking and Linux. These labcasts are practical demonstrations which explore equipment and techniques which extend the coverage of the module. They move beyond video by the use of ‘widgets’ and a chat window which provide opportunities for students to engage actively with the demonstration. We will briefly outline these activities and present some student evaluation results.
We discuss how we plan to extend these activities into remote practical activities using OpenSTEM lab facilities. These will allow students to undertake further practical work where the student directly controls the practical activity.
We will present a framework of possible use-cases for remote practical activities, considering group size, synchronicity and locus of control; discuss some of the technological and pedagogical implications; and review progress towards delivering engaging practical activities at a distance.
A talk delivered at The Open University STEM Teaching Conference 6 Feb 2020
OpenStudio and Digital Photography: creating and sharing better images. Jon Rosewell
OpenStudio was created for the Open University course 'T189 Digital Photography: creating and sharing better images', and continues to be used in the current version TG089 run in partnership with the Royal Photographic Society. I will discuss the pedagogy of the course, the role of OpenStudio within it, and how OpenStudio is perceived by students.
Quality assurance of MOOCs: The OpenupEd quality label. Jon Rosewell
The OpenupEd quality label is a quality enhancement approach to e-learning, tailored specifically to MOOCs. I will briefly introduce the OpenupEd quality label, show how it relates to other e-learning quality frameworks, and outline the ways in which it can be used, ranging from informal self-assessment to a full external review. Which of the benchmarks could contribute to enhanced design of MOOCs? Are the benchmarks sufficiently detailed? Do they capture all important aspects?
Quality frameworks for e-learning (SIEAD 2018, Brazil). Jon Rosewell
A contribution to INTERNATIONAL SEMINAR ON OPEN AND DISTANCE EDUCATION (SIEAD-BR 2018) 22nd October 2018.
"Contributions from Open and Distance Education to Higher Education Quality: present and future"
"Contribuições da Educação Aberta e à Distância para uma Educação Superior de Qualidade: presente e futuro"
In this presentation I will suggest using a quality framework to help you think about and improve the quality of e-learning. I start with some general observations about quality and the need for quality frameworks. I then discuss two specific frameworks: the well-established E-xcellence benchmarks for e-learning, and the OpenupEd framework which has been specifically aimed at MOOCs. Finally I return to some more practical advice, particularly about thinking about the learning design of a course at an early stage.
The Open University, eSTEeM Conference, 25 April 2017
Summary
Find out how the OpenSTEM lab can be used to support remote access to tutor-led practical work in robotics and other technologies.
Abstract
As befits its title, Technologies in practice (TM129) takes a practical focus to learning, with up to 50% of study time having a practical aspect. The tutorial program should support this and in the past some tutors have found innovative ways of bringing practical demonstrations or exercises into their face-to-face sessions, for example demonstrating a robot vacuum cleaner or setting up an ad-hoc network of students’ laptops.
Producing online tutorials with an equivalent practical focus is a challenge. The creation of the OpenSTEM lab provides an opportunity to meet this challenge. Part of the HEFCE and OU funding for the OpenSTEM lab has provided five large ‘Baxter’ robots which will be accessible remotely as well as two which will be used at residential school. The lab also provides racked equipment bays for smaller remote access experiments, such as those being developed for the electronics curriculum. For a large population module such as TM129, this infrastructure provides an opportunity to roll-out practical-focused synchronous tutorial events to all students, provided the activities are well designed and scripted so that they can be delivered by a number of tutors.
In this presentation I will review the possible use-cases for remote practical activities, discuss some of the technological and pedagogical challenges, and review progress towards delivering engaging practical activities at a distance.
Robot explorers: Gender and group attitudes to STEM: a pilot evaluation of an outreach robotics activity. Jon Rosewell
Gender and group attitudes to STEM: a pilot evaluation of an outreach robotics activity.
Alice Peasgood, Jon Rosewell, Tony Hirst
Abstract
Women are underrepresented in Science, Technology, Engineering and Mathematics (STEM) subjects in higher education (HE), although attitudes and participation in STEM are less polarised at younger ages. Outreach activities that aim to inspire and enthuse school-age students may help girls to consider study and careers in STEM subjects.
The Royal Institution run extra-curricular ‘masterclasses’ that aim to inspire school students in mathematics. Our session in a series of secondary maths masterclasses uses a hands-on robotics activity based on the theme of ‘robot explorers’. Students work in small groups to solve the challenge of programming a small mobile robot to navigate by applying their maths and programming skills. This pilot study looked at the possible influence of gender and friendship groups on attitudes to STEM in the context of that activity.
Those attending the masterclass series were Year 9 students nominated by East London schools. Students completed a short evaluation sheet for the session and reported whether they knew others in their group. An observer noted whether boys or girls used the computer, held the robot, and similar measures. All data was collected anonymously and the study was approved by the OU Human Research Ethics Committee (HREC/2016/2238/Rosewell/1).
Preliminary results suggest that girls enjoyed the class more than boys. Girls also showed a greater increase in level of interest in robotics, although from a lower level than boys. There is a suggestion that individuals who found themselves in a group in which they had no friends reported a lower score for enjoyment.
The importance of friendship to the enjoyment and learning experienced in small group activity should be considered in the design of extra-curricular activities if they are to meet their stated aim of enthusing young students.
Next steps for excellence in the quality of e-learning (EADTU Paris masterclass). Jon Rosewell
Overview of Excellence NEXT project for quality assurance in e-learning, presented as part of masterclass at EADTU conference, Paris, 2013. [http://conference.eadtu.eu/]
A presentation on 'MOOCs and Quality Issues' given at a workshop organised by the QA-QE special interest group of the UK Higher Education Academy (HEA) [http://qaqe-sig.net/?page_id=8]
A speculation on the possible use of badges for learning at the UK Open University. Jon Rosewell
There has recently been a flurry of interest in supporting the idea of using ‘badges’ to recognise learning, particularly due to the Mozilla Open Badges project (http://openbadges.org/) and the funding channelled through the 2012 Digital Media and Learning Competition (http://www.dmlcompetition.net/). Badges offer the potential of rewarding informal learning and reaching non-traditional learners.
This paper speculates on ways in which badges for learning could fit into the offering of the UK Open University, and exposes some of the tensions that badges raise.
[Paper presented at European Association of Distance Teaching Universities (EADTU) conference, Cyprus, 27-28 Sept 2012]
Badges for Nature (HASTAC/DML proposal). Jon Rosewell
‘Badges for Natural History’ will recognize and reward the knowledge and skills of the new generation of naturalists that are making a great contribution to our understanding of the world’s biodiversity. These badges will be issued first by a group of eight projects from across the globe. Badge earners will be able to move their badges between sites as they share their knowledge and experience of natural history across the world.
Next Steps for Excellence in the Quality of e-Learning. Jon Rosewell
The development of e-learning has progressed to a stage where it is becoming part of mainstream provision in higher education. Therefore the issue of assessing and sustaining the quality of e-learning must now come to the fore. Quality assessment in higher education is well-established in relation to learning and teaching generally, but what methods can be used to establish quality in the domain of e-learning?
The E-xcellence methodology for assessing quality in e-learning (EADTU 2009) is securing recognition by European and international learning organisations. It was designed to be applied to the design and delivery of e-learning in both distance learning and blended learning contexts. It supports a range of uses, from accreditation by external agencies to process improvement through internal review.
The methodology presents principles of good practice in six domains of e-learning: strategic management; curriculum design; course design; course delivery; student support; and staff support. A total of 33 benchmark statements cover these domains, and are supported by a handbook for practitioners and guidance for assessors. The handbook includes principles for quality e-learning and exemplars of good practice. Amongst the tools is an online ‘QuickScan’ self-evaluation questionnaire based on the E-xcellence benchmarks which is highly valued as a focus for collaborative review of e-learning programmes.
The e-learning landscape has changed since the E-xcellence methodology was first developed. In particular, the use of Open Educational Resources (OECD 2007) and the application of social networking tools (Mason & Rennie 2008) were not explicitly considered in the original benchmarks. Accordingly, the E-xcellence NEXT project was instigated to produce and evaluate a revision of the benchmark criteria, associated handbook and exemplars. This paper describes the project process and initial recommendations.
A consultation exercise was carried out among E-xcellence participants. Feedback from this was brought to participatory workshops at a European Seminar on QA in e-learning in June 2011. Following this exercise, the benchmark statements were revised and are now available in beta version.
The project resources (Quickscan and manual) are being used for a series of self-evaluation and assessment seminars held at European higher education institutions. Feedback from these assessment seminars will be used to finalise materials for publication late in 2012. At that point the E-xcellence Next project will offer to the higher education community a set of self-evaluation and quality assessment tools which are fully updated to encompass social networking, Open Educational Resources and other recent developments in e-learning.
Can computer-marked final assessment improve retention? Jon Rosewell
Distance learning modules (particularly low-cost introductory and enrichment modules) may show poor retention compared to traditional campus courses. The perceived difficulty of exams and end-of-module assessments (EMA) appears to deter some students from submitting. In contrast, interactive computer-marked assignments (iCMA) are typically attempted by most students.
Can retention therefore be improved by changing the format of part of the final assessment to an iCMA?
Robotics and the meaning of life is a 10-point, 10-week general-interest Open University module. The assessment comprised a mid-module iCMA and a final written EMA. The iCMA (a Moodle quiz) provided detailed feedback only after the submission deadline. The EMA included short-answer questions, a programming question and an essay. The EMA was script-marked and feedback limited to overall score and performance profile provided well after the end of the course.
The intervention simply replaced the script-marked short-answer questions by a second iCMA covering the same content with similar questions. The programming and essay questions were retained unchanged as a written, script-marked EMA.
The hypothesis to be tested was that retention would increase: students would be more likely to submit the final iCMA, their confidence would increase, and they would be motivated to submit the written EMA.
Quantitative data were gathered for patterns of submission, course completion and pass rates for two presentations (124 and 220 students); data were also available for thirteen previous presentations (1814 students). Structured interviews were carried out to probe student preferences, confidence and engagement.
More students submitted the iCMA (86%) than the EMA (81%). Although they had the same deadline, 91% of students submitted the iCMA before the EMA. They submitted the iCMA well in advance of the deadline (median 4 days 15 hrs) but kept the EMA open as long as possible (median 18 hrs before deadline; 11% submitted in the final hour). These patterns strongly suggest that students were more confident with the iCMA than the EMA. Completion rates were the highest recorded: 88% and 89% compared to 79% for pre-intervention presentations. Overall pass rates were also improved (83% and 85%, cf. 76%). This can be ascribed to improved submission rates alone: the pass rate and mean scores among those who submitted were unchanged, giving confidence that the assessment difficulty was unaltered.
Student interviews suggested that students did attempt the final iCMA before the EMA and had greater confidence in obtaining a good mark for the iCMA than the EMA. Students valued the mix of assessment methods and felt it produced a robust result; although some expressed concern over the correctness of computer marking, they appreciated the detailed feedback it provided.
This intervention suggests that a change of assessment format can improve student engagement and pass rates without compromising rigour.
QA in e-Learning and Open Educational Resources (OER). Jon Rosewell
Introductory slides for a workshop on updating the e-learning quality assurance benchmarks of the E-xcellence NEXT project http://www.eadtu.nl/e-xcellencelabel
Exploring Web 2.0 to support online learning communities: where technology me... Jon Rosewell
A presentation to kick off a workshop at ICL2009 conference, given by Giselle Ferreira, Wendy Fisher, Jon Rosewell & Karen Kear, The Open University. http://www.open.ac.uk/blogs/terg/
Equitability and Dominance in Online Forums: An Ecological Approach. Jon Rosewell
Participation in online forums varies greatly: a few students post many messages, some post a few, and many only read. A rough ‘rule of thirds’ has been suggested (e.g. Mason 1989), but it is possible that this rule of thumb hides interesting structure.
However, similar patterns can be seen when analysing the abundance of species in ecological communities, so maybe indices of ecological diversity could also provide a useful characterisation of an online community. Such indices can unpick both ‘species richness’ (here number of participants) and equitability / dominance.
To explore this, 36 forums containing 27,000 messages were analysed to see if an ecological approach to online communities could offer useful insights.
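The ecological indices mentioned above can be computed directly from per-participant message counts; Shannon's diversity index and Pielou's evenness are the standard measures. This sketch is my illustration of the approach, not the analysis code used in the study:

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over participants' message counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def equitability(counts):
    """Pielou's evenness J = H / ln(S), where S is the number of posters.

    J = 1 when everyone posts equally; J approaches 0 when a few dominate.
    """
    s = sum(1 for c in counts if c > 0)
    if s <= 1:
        return 0.0
    return shannon_diversity(counts) / math.log(s)
```

A forum where three students post 10 messages each has J = 1.0; one where a single student posts 100 messages against two others' single posts has J close to 0.1, exposing the dominance that a raw participant count would hide.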
Members of the OU Robotics Outreach Group have been running hands-on school and community workshops using the Lego Mindstorms robot invention system. Typically, these activities have been based around remote control activities using prebuilt robots, programming workshops using prebuilt robots, or hybrid workshops involving simple robot construction and programming tasks.
In this presentation, we describe a new activity format - a robot construction activity using a preprogrammed robot controller capable of solving a situated task based on the popular RoboCupJunior robot rescue challenge.
3. Why ask students about certainty?
Ask students:
● What do you know?
● How certain is your knowledge?
Wikipedia: https://en.wikipedia.org/wiki/Confidence-based_learning
How can we get students to honestly report their certainty?
4. Confidence ≡ certainty
Confidence-based marking (CBM)
NB terminology: ‘certainty’ would be better, but ‘confidence’ has stuck
Confidence   Score if correct   Score if wrong
Low                1                  0
Medium             2                 -2
High               3                 -6

Tentative and correct scores 1; confidently correct scores 3; cocksure, and wrong, scores -6.
Gardner-Medwin & Curtin (2007) Certainty-Based Marking (CBM) for Reflective Learning and Proper Knowledge Assessment [http://www.ucl.ac.uk/lapt/REAP_cbm.pdf]
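The mark scheme in the table above is easily captured in code; a minimal sketch (the function name is mine):

```python
# CBM mark scheme: higher stated confidence earns more when correct,
# but attracts a heavier penalty when wrong.
CBM_MARKS = {
    "low":    (1,  0),
    "medium": (2, -2),
    "high":   (3, -6),
}

def cbm_score(confidence, correct):
    """Return the CBM mark for a single answer.

    confidence: 'low', 'medium' or 'high'
    correct:    True if the chosen answer was right
    """
    right, wrong = CBM_MARKS[confidence]
    return right if correct else wrong
```

So cbm_score('high', True) gives 3, while cbm_score('high', False) gives -6: the cocksure-and-wrong case.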
5. Some trial questions
Question                            Taken by   Mean score
What is 2 + 2?                         238         2.51
What is the derivative of x³?          223        -0.47
Who painted the 'Mona Lisa'?           212         2.14
Who is the 'Mona Lisa'?                208         0.21
Uncertainty principle -- whose?        207         0.03
Uncertainty principle -- formula?      218         0.01

The questions range from the easy, through the difficult, to the tricky!
6. Do students honestly assess confidence?
Question                            High   Medium   Low
What is 2 + 2?                       210       5     23
What is the derivative of x³?         96      46     81
Who painted the 'Mona Lisa'?         165      21     26
Who is the 'Mona Lisa'?               56      41    111
Uncertainty principle -- whose?       62      31    114
Uncertainty principle -- formula?     39      21    158
7. Medical students
CBM – for learning and revision
Moodle: https://docs.moodle.org/30/en/Using_certainty-based_marking
10. Benefits to students
CBM – motivations
● Rewards care and effort
● Greater engagement
● Encourages reflective learning
● Encourages self-assessment
Thinkstock.com: 483452995
11. Do students like CBM?
Yes – regard it as fair and challenging, helpful to learning
and
No – less likely to do CBM than MCQ when optional
Schoendorfer, N., & Emmett, D. (2012). Use of certainty-based marking in a second-year medical student cohort: a pilot study. Advances in Medical
Education and Practice, 3, 139–43. doi:10.2147/AMEP.S35972
Nix, I., & Wyllie, A. (2011). Exploring design features to enhance computer-based assessment: Learners’ views on using a confidence-indicator tool
and computer-based feedback. British Journal of Educational Technology, 42(1), 101–112. doi:10.1111/j.1467-8535.2009.00992.x
Barr, D. A., & Burke, J. R. (2013). Using confidence-based marking in a laboratory setting: A tool for student self-assessment and learning. The
Journal of Chiropractic Education, 27(1), 21–26. doi:10.7899/JCE-12-018
12. Is CBM fair?
● No significant gender differences
● Very few students seem over-confident, but some were under-confident
‘In decision-rich occupations such as medicine, mis-calibration of reliability is a serious handicap’ Gardner-Medwin (2014, p.6)
● Scores will generally be lower when marked as CBM than MCQ
but possible to scale to non-CBM marking to set grade boundaries
Gardner-Medwin, A. R., & Gahan, M. (2003). Formative and Summative Confidence-Based Assessment. In Proc. 7th International
Computer-Aided Assessment Conference (pp. 147–155). Retrieved from www.caaconference.com
Gardner-Medwin, T. (2014). CBM selftests at UCL: The past and the future of LAPT. Retrieved from
http://www.tmedwin.net/~ucgbarg/tea/SLMS2014_A4.pdf
14. What is wrong with MCQ?
MCQ
Pros:
Objective marking
Reliable marking
Easy to implement
Cons:
Distractors trivial
May engender misconceptions
Working backwards
Open question
Pros:
Numeric easy to mark
Tests deeper learning
Can find misconceptions
Cons:
Free text still difficult to mark reliably
16. ‘Open’ CBM – what benefit?
MCQ
Pros:
Objective marking
Reliable marking
Easy to implement
Cons:
Distractors trivial
May engender misconceptions
Working backwards
Open CBM
Pros:
Open question
Reflection
…as MCQ
Cons:
Not always applicable
Intimidating?
Personality dependent?
17. ipace
www.indegene.com/lifesciences/ipace
● ‘Global training partner to pharma companies’
● Need to establish that reps are properly trained for compliance
● Platform that delivers:
● Regular questions from a bank
● CBM assessment
● Mastery = all questions correctly answered more than once
25. Future of CBM?
Pros:
● Fit with competency and mastery assessment is a good one
● Use in formative / revision contexts avoids issues to do with
unconventional marking and assigning grade boundaries
Cons:
● Dislike of negative marking
● Poor platform support – but improving
● Difficulty of marking more complex question types
This is a repeat of a talk given about 5 years ago – what has changed?
Answer: not much!
Will review why it is a good assessment technique
Not been able to persuade OU colleagues to use it
Remains niche in rest of world – but may have found a good niche
This a confidence-based question
Ask a question – often multiple choice, doesn’t have to be
But also ask students to report their certainty
Can give feedback as normal, mark as CBM.
Teachers & trainers need to know what students know and what they don’t know.
But both students and teachers need to know whether students are certain about what they know and don’t know – if not, there could be problems. At worst, they could think they know but be wrong – and therefore make mistakes. If they are uncertain, they can’t make correct decisions.
Certainty or confidence? Subtlety in language – claim that what students are indicating and what CBM encourages is certainty, not confidence as character trait.
In typical university settings, students are driven by marks!
Show an example later which has taken a gaming approach
Some examples:
First, student who is tentative – if they get it correct they get some marks, if they get it wrong get nothing.
Next, student who is both correct and certain – they get maximum marks.
Finally, student who is overconfident – and wrong. They get penalty.
So really important to judge certainty realistically – if you report high certainty, risk losing marks if you are wrong. If you report low certainty, you can’t score good marks.
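Under the 1/2/3 and 0/-2/-6 scheme, the rational choice follows from the expected score. A short illustrative sketch (function names are mine, not from any CBM platform):

```python
# Expected score at each confidence level, given the student's subjective
# probability p of being correct, under the 1/2/3 and 0/-2/-6 scheme.

def expected_scores(p: float) -> dict:
    return {
        "low": 1 * p + 0 * (1 - p),     # = p
        "medium": 2 * p - 2 * (1 - p),  # = 4p - 2
        "high": 3 * p - 6 * (1 - p),    # = 9p - 6
    }

def best_confidence(p: float) -> str:
    """Confidence level that maximises expected score."""
    scores = expected_scores(p)
    return max(scores, key=scores.get)

# Break-even points: medium beats low when p > 2/3, and high beats
# medium when p > 4/5 -- so honest reporting maximises marks.
```

This is the sense in which the scheme rewards realistic self-assessment: no bluffing strategy has a higher expected score than reporting your true certainty.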
Some examples:
An easy question that everyone is certain about – average score is high
A really difficult question on quantum physics – nearly everyone gets wrong, so score is close to zero.
The tricky one – something that many university students ought to know, but quite a few get wrong, so average mark is negative.
Detail shows how students are answering
Easy question: most answer with high confidence
Difficult question: most answer with low confidence
Tricky question: splits into those who know, and those who don’t.
Complicated graph!
First note there is lots of data. This is for students doing practice tests for learning & revision; not for serious assessment. Means you get full range of poor to good results.
Some students better than others – better students are to right of graph getting greater percentage correct. Weaker students are to left, getting fewer correct.
If they get more correct, then expect score to be higher – that’s on vertical scale. But you can score best by setting certainty correctly, so someone who got say 60% correct overall would get higher mark if they set certainty/confidence sensibly for each question.
Scores above the pale green line with corners show successfully judging confidence -- green line shows always setting confidence low, medium and high, but not adjusting for each individual question.
Corners represent places where switch from low to medium to high confidence should occur, if student knows only how good they are overall.
Most students, even for revision where marks don’t count, are setting confidence level sensibly
Few students look like they are doing really badly – negative marks overall!
But remember these are students exploring, not serious final assessment.
Data from Tony Gardner-Medwin’s medical students
Another way of looking at same data which highlights the effect of certainty – most results are above the line, indicating students are aware of where their knowledge is reliable
Under exam conditions, then spread is very much reduced.
Some low marks, but no negatives.
Nearly all students are showing marks above the line – that is, they are correctly assessing their certainty for each question and so maximising their scores compared to answering with certainty set overall.
So CBM is delivering accurate assessment of knowledge.
But CBM is not just about more accurate assessment.
Could also have positives for student learning – better engagement with assessment helps students learn.
Students don’t like extra ‘stress’ imposed by setting certainty – but it is good for them!
Certainty or confidence as character trait?
Can over/under-confidence be dissociated from knowledge in any case? Can argue that correct understanding of own knowledge is essential part of academic and professional practice.
Is used in medical contexts because acknowledged that realistic judgement of certainty is essential to skills: ‘in decision-rich occupations such as medicine, mis-calibration of reliability is a serious handicap’ Gardner-Medwin (2014, p.6)
An open question asks a question but gives no clues. Here a simple number is expected, but more generally a phrase, sentence, paragraph…
A multiple-choice question gives options – choose the correct one.
A confidence-marked question also asks student to say how certain they are of their answer.
MCQs are well established – objective, reliable, easy to implement
But pedagogically not ideal – have some drawbacks.
Open questions might be better tools for learning – but they are difficult to implement on a computer
This is a variant of CBM
Starts with an open question.
We don’t ask students to submit answer immediately – instead they have to set their confidence.
Once set it is locked – can’t change.
Now reveal options – actually multiple choice.
Can give feedback as normal, mark as CBM.
Benefit of open CBM is
-- an open question, so benefits for reflection
-- retains benefits and avoids drawbacks of MCQ
-- has some drawbacks – not always easy to implement, and needs further research on personality issues
Ipace features:
-- bank of questions
-- few questions each week pushed to students
-- posed in CBM format
-- repeated until question answered successfully more than once, then dropped from pool
-- not used for summative assessment but for competency / mastery
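The mastery rule above can be sketched as simple pool management. This is a guess at the mechanics, not the real platform's code; the threshold constant is an assumption based on "answered successfully more than once":

```python
# Illustrative sketch of the mastery rule: a question stays in a student's
# pool until it has been answered correctly more than once, then is dropped.

REQUIRED_CORRECT = 2  # assumed reading of "correctly answered more than once"

def update_pool(pool: dict, question: str, correct: bool) -> dict:
    """Record one attempt; drop the question once mastery is reached."""
    if correct:
        pool[question] = pool.get(question, 0) + 1
        if pool[question] >= REQUIRED_CORRECT:
            del pool[question]  # mastered: removed from the pool
    return pool
```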
Asked a question – have to set your confidence first.
Confidence/certainty expressed in terms of a bet – how much are you prepared to stake that you will get this right?
Virtual coins only!
Once certainty set (= stake bet), then select options
If incorrect, you lose your stake
If correct you win the bet – payback is 2 x stake
Leaderboards so competitive
Feedback on performance by topic and by confidence level
Badges as additional motivation
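The betting variant above maps directly onto a payoff function. A minimal sketch, assuming a net-change convention (stake lost if wrong; payback of 2 × stake if correct, i.e. stake returned plus equal winnings):

```python
# Sketch of the gamified CBM payoff described above: the student stakes
# virtual coins before answering; a wrong answer loses the stake, a
# correct answer pays back twice the stake.

def bet_outcome(stake: int, correct: bool) -> int:
    """Return the change in the student's coin balance for one question."""
    if correct:
        return stake   # payback is 2 x stake: stake back plus winnings
    return -stake      # stake is lost
```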
Fit to mastery: used in medical contexts because acknowledged that realistic judgement of certainty is essential to skills
Gardner-Medwin (2014, but Bender?) indicates higher accuracy compared with standard marking: students identify their uncertain answers, which reduces variance, so the predictive accuracy and reliability of the exam improve.
Complexity of setting up. Not much platform support – but now in Moodle and in Questionmark Perception.
Dislike of negative marking.
Difficulty of marking more complex question types: multiple- cf single-response questions, partial correct scores, differently weighted questions.