Based on the information provided, this item tests the learner's ability to make a clinical decision by diagnosing the most likely cause of the patient's presentation from the key laboratory findings in the stem. The stem provides sufficient relevant information to answer the question without tricking or misleading the learner. The alternatives are plausible, independent diagnoses to consider. This item follows best practices for writing multiple choice questions that test clinical decision-making abilities.
This document discusses milestones and entrustable professional activities (EPAs) in medical education. It defines milestones as significant points in a learner's development that identify the knowledge, skills, and attitudes expected at each stage of training. Milestones provide learners with feedback on their progress and define competencies for assessment. The document also introduces EPAs, which are routine professional tasks that require specific competencies. EPAs can be used to structure work-based assessment of whether a learner has demonstrated the competence to independently perform important professional activities.
A journey towards programmatic assessment (MedCouncilCan)
The document discusses programmatic assessment in medical education. It begins by outlining various assessment methods and frameworks for evaluating competencies. It then discusses research findings on the validity, reliability, and educational impact of assessment methods. Key findings include that no single method can adequately measure all competencies, and that both standardized and unstandardized methods are needed. Reliability increases with larger samples and aggregation of data from multiple methods and assessors. Assessment works best when it provides meaningful feedback to support learning. The document concludes by describing examples of programmatic assessment approaches that integrate various longitudinal methods to provide rich data for high-stakes decisions.
Overall, assessments are used either for programmatic assessment or for learning assessment. One of the most familiar learning assessments is the multiple-choice test, which reflects the typical pen-and-paper traditional classroom test (Popham, 2006). However, these tests are not easy to construct in a way that ensures validity, owing to unclear directions, ambiguous statements, unintended clues, complicated syntax, and difficult vocabulary (Popham, 2006). Other learning assessments with construct validity, such as the essay and the reflective journal, tend to reflect student-centred pedagogy. These assessments are ideal for assessing the learning outcomes of the individual and increase students' personal responsibility for their own learning. This reading document provides a brief summary of assessment tools available for both programmatic and learning assessment.
This document proposes a model for programmatic assessment that optimizes assessment for learning while arriving at robust decisions about learner progress. The model distinguishes between learning activities, assessment activities, and learner support activities throughout an ongoing curriculum. Individual assessments are designed to be maximally informative for learning, while a longitudinal program of various assessment methods contributes to certification decisions. The principles discussed include ensuring validity in standardized and non-standardized assessments, using both quantitative and qualitative data, and relying on expert judgement at various evaluation points. An example is provided of how this model could be applied to a blended TeleGeriatrics Nurse Training Course.
Cees van der Vleuten
Maastricht University
The Netherlands
www.maastrichtuniversity.nl/she
Presented at Perspectives in Competency Assessment
A Symposium by Touchstone Institute
www.touchstoneinstitute.ca
This document discusses the evolution of programmatic assessment in UK medical training over the past 30 years. It outlines how assessment has shifted from high-stakes exit exams to integrated programs that use workplace-based assessments like mini-CEX, DOPS, and CbD. Key organizations like the GMC, PMETB, and foundation program have developed principles of good assessment including assessing multiple competencies through various methods. The foundation program initially piloted four assessment tools but has since refined these to better provide feedback and identify trainees needing support. Overall, the document traces the progression towards valid programmatic assessment across medical education in the UK.
This document provides an overview of programme evaluation, including definitions, objectives, common designs, data used, and differences between research and evaluation. Programme evaluation is defined as a systematic process of gathering evidence to inform judgements about whether a programme is meeting its goals and how it can be improved. Key points include:
- Formative and summative evaluations have different objectives related to programme development and decision-making.
- Common designs include pre-post tests with or without control groups, and both quantitative and qualitative data are important.
- Internal and external evaluations have advantages and limitations.
- Kirkpatrick's model outlines levels of evaluating training from reactions to outcomes.
- Management-oriented approaches like the CIPP model focus on evaluating context, input, process, and product to inform decision-making.
The document discusses various methods for randomizing units in an experiment evaluating a social program or intervention. It covers choosing an appropriate unit of randomization based on how the intervention is administered and outcomes are measured. Common units discussed include individual, cluster/group, classroom and school levels. The document also addresses real-world constraints and provides examples of different randomization designs that can be used, including basic lottery, phase-in, rotation, encouragement, and varying treatment levels. It emphasizes the importance of randomization in obtaining an unbiased estimate of a program's causal impact.
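The choice of randomization unit described above can be sketched in a few lines. The following is a minimal illustration of cluster-level (here, school-level) random assignment; the school names and the even treatment/control split are assumptions for the example, not details from the source document.

```python
import random

def randomize_clusters(clusters, seed=42):
    """Randomly assign clusters (e.g. schools) to treatment and control arms.

    Randomizing at the cluster level is appropriate when the intervention
    is administered to whole groups rather than to individuals.
    """
    rng = random.Random(seed)  # seeded for a reproducible assignment
    shuffled = clusters[:]     # copy so the input list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical example: 10 schools split evenly into two arms.
schools = [f"school_{i}" for i in range(10)]
treatment, control = randomize_clusters(schools)
```

Because assignment is random, the two arms are comparable in expectation, which is what yields the unbiased estimate of causal impact the document refers to.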
The document discusses different methods for evaluating the impact of an education program called the Balsakhi program in India. It compares the results of 4 different evaluation methods: 1) pre-post comparison showed a test score gain of 26.42 points, 2) simple difference comparison showed Balsakhi students scored 5.05 points lower, 3) difference-in-differences estimated an impact of 6.82 points, and 4) a regression controlling for covariates estimated an impact of 1.92 points. Randomization was proposed as the best method to construct a valid counterfactual for estimating true program impact.
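The difference-in-differences estimate mentioned above is simple arithmetic: the change in the treatment group's scores minus the change in the comparison group's scores. The sketch below shows the calculation; the pre/post numbers are illustrative placeholders, not the actual Balsakhi data.

```python
def difference_in_differences(t_pre, t_post, c_pre, c_post):
    """Estimated impact = (treatment group's change) - (comparison group's change).

    This nets out any trend common to both groups, but still assumes the
    two groups would have followed parallel trends absent the program.
    """
    return (t_post - t_pre) - (c_post - c_pre)

# Illustrative numbers only:
impact = difference_in_differences(t_pre=24.8, t_post=51.2,
                                   c_pre=36.7, c_post=56.3)
# (51.2 - 24.8) - (56.3 - 36.7) = 26.4 - 19.6 = 6.8
```

As the document notes, each non-experimental method rests on assumptions about the counterfactual; randomization avoids them by constructing the counterfactual directly.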
The fifth webinar continues the momentum of the series as it focuses on providing concrete approaches for identifying barriers and enablers, emphasising behaviour change approaches.
READ MORE: http://bit.ly/2LOwbj0
Closing the Loop on Clinical Competency-Based Assessments (ExamSoft)
Presented by Karen Bobak, DC, EdD, Dean of Chiropractic, and Wendy Maneri, MS, DC, Associate Dean of Chiropractic Clinical Education and Health Centers, of New York Chiropractic College, this session discussed ways to effectively assess student competency in a clinical setting.
Effectively assessing student competency in a clinical setting is an essential element in professional healthcare education. Moreover, the use of assessment data to improve student learning is essential in order to meet program goals, professional standards, and provide quality patient care. Examples of strategies used to develop and implement a process of assessment / analysis / communication and change will be shared. Participants will be encouraged to develop a process framework while considering the challenges and opportunities that exist within their programs.
The document discusses various methods for assessing learners, including formative and summative assessments, and highlights the importance of balancing reliability, validity, cost and authenticity when selecting assessment tools. It compares methods for assessing different domains like knowledge, skills, problem-solving and attitudes. Taxonomies are useful for classifying learning objectives and selecting appropriate assessment methods matched to the level of learning.
Utilizing tools and resources to assess fellows, DeCross, Sunday Feb 24, 10:30-11:00 (jakinyi)
This document discusses tools and resources that can be used to assess gastroenterology fellows. It begins by outlining the presenter's needs as a program director with limited time. Process-based assessments that are artificial are deemed nearly useless. More meaningful assessments provide genuine growth opportunities through objective, high-volume tools with permanent records. Specific tools deemed useful include nurse, staff, patient, and lecture assessments. The GTE exam is considered the most useful tool, as it is purely objective and identifies underperforming fellows early. Remediating struggling fellows requires a constructive approach with written plans and monitoring. Case examples demonstrate how feedback from nurses identified an interpersonal issue in one fellow, while another fellow was placed on notice to improve a low GTE score.
This sample answer sheet corresponds with the eighth webinar in the Online Journal Club series, “How do young people make sense of cannabis evidence?"
The National Collaborating Centre for Methods and Tools is funded by the Public Health Agency of Canada and affiliated with McMaster University. The views expressed herein do not necessarily represent the views of the Public Health Agency of Canada.
NCCMT is one of six National Collaborating Centres (NCCs) for Public Health. The Centres promote and improve the use of scientific research and other knowledge to strengthen public health practices and policies in Canada.
NIHR Complex Reviews Support Unit (CRSU) - An Introduction (HEHTAslides)
The NIHR Complex Reviews Support Unit (CRSU) was established in 2015 to support complex systematic reviews that are important to the UK NHS. In its first 18 months, the CRSU provided expertise and support to several Cochrane review groups on diagnostic test accuracy reviews, network meta-analyses, and other complex review methods. The CRSU also held workshops to build capacity for complex reviews and discussed challenges in meta-analyzing diagnostic test accuracy and network meta-analysis data.
This document discusses trends in health professions education. It covers topics such as outcome-based education, professionalism, learning through simulation, interprofessional education, and community-based medical education. Specifically, it outlines the goals of outcome-based education including balancing knowledge, skills, and attitudes. It provides examples of competency frameworks from organizations like ACGME. The document also discusses the importance of professionalism in medicine given changing public expectations. Additional sections cover how simulation can enhance learning and the benefits of interprofessional education and community-based training to better meet community health needs.
The document outlines the components of impact evaluation including needs assessment, theory of change, process evaluation, impact evaluation, and cost-effectiveness analysis. It discusses framing impact evaluation through a theory of change and using randomized evaluations as the gold standard for measuring a program's causal impact. Randomized evaluations compare outcomes between participants who are randomly assigned to a treatment group or control group to estimate the counterfactual.
Surveying the landscape: An overview of tools for direct observation and asse... (MedCouncilCan)
This document provides an overview of a framework and tools for direct observation and assessment in high-stakes settings. It discusses the challenges of incorporating workplace-based assessments into high-stakes evaluations due to issues with sampling, training, and measurement error. While there is limited psychometric evidence to support the use of workplace-based assessments in high-stakes contexts, jointly attesting to direct observation data may provide useful information to inform licensing decisions and identify gaps in assessment blueprints. The document advocates for more research on the reliability and validity of workplace-based assessments before incorporating local scores into high-stakes evaluations.
This document describes a study that evaluated the effects of using video screen captures of physicians in simulation-based learning experiences on associate degree nursing students' attitudes toward physician-nurse collaboration. Students completed a pre-test survey measuring their attitudes, then participated in a simulation using video of a physician. After debriefing, they completed the same survey as a post-test. Results found a statistically significant improvement in attitudes toward collaboration, indicating video captures can effectively simulate interactions with physicians without requiring live actors. The study demonstrated collaboration can be simulated with minimal technology investment.
Ovretveit implementation science research course, 1 day, Sept 11 (john)
1. The document discusses a workshop on implementation science and research, which aims to explain what implementation science is, describe elements of an implementation program, explain strengths and limitations of implementation research studies, and plan an implementation study.
2. Implementation research mostly describes, evaluates and explains an implementation in different real-life settings. It involves assessing elements like content, structure, strategy, and methods using tools like CESSiM and REAIM.
3. Effective implementation is important for improving health outcomes and depends more on how interventions are implemented than just the intervention itself. Factors like context affect implementation success.
Achieving behaviour change for patient safety, Judith Dyson, Lecturer, Mental Health - University of Hull
Presentation from the Patient Safety Collaborative launch event held in London on 14 October 2014
More information at http://www.nhsiq.nhs.uk/improvement-programmes/patient-safety/patient-safety-collaboratives.aspx
Faster Improvement with Adaptive Implementation Research (UCLA CTSI)
Feb 3, 2016
Dr. John Ovretveit, Director of Research and Professor of Health Innovation and Evaluation at the Karolinska Institutet, presented as part of a seminar series on UCLA CTSI Dissemination, Improvement and Implementation Research.
This document discusses theory of change and its importance for evaluation. It begins by introducing theory of change and explaining that it is a process for exploring how change happens in a particular context. It then discusses building a theory of change by defining a program, its outcomes and intermediate steps, and identifying assumptions. The document explains that theory of change is important for evaluators to consider process and for programmers to be results-oriented. It also notes a common criticism is that theory of change can oversimplify programs.
This document summarizes a study that evaluated psychiatric nursing students' experiences in a simulated mental health ward prior to their clinical internship. The study aimed to understand students' perspectives on how the simulation prepared them for internship. Students completed a standardized evaluation scale after participating in simulations involving patient scenarios. Both quantitative and qualitative feedback was collected. Students reported that the simulation experience helped develop their clinical skills and critical thinking. They felt more prepared to care for patients. However, students also found the simulation experience anxiety-provoking. Suggestions for improving simulations included integrating them more frequently throughout the program to increase comfort and reducing the stressful environment.
This document summarizes a presentation on assessment and multiple choice questions (MCQs). It discusses how assessment guides student learning through the backwash effect. Formative assessment supports learning by allowing recurrent testing. Course leaders can influence learning by aligning teaching and examinations. The presentation then focuses on constructing high-quality MCQs, including considering different cognitive levels, question types, and crafting effective stems, options, and keys. Workshops are proposed to build MCQ databases, test questions, conduct talk-aloud protocols with students, and eventually digitalize the assessment process.
Here are 3 key questions the research group may want to consider:
1. Standardizing outcomes and assessments across sites while allowing for local flexibility. What core metrics and tools can be used consistently, while permitting some customization?
2. Developing a governance structure and processes that are inclusive yet efficient. How will decisions be made to balance needs of all members?
3. Determining authorship guidelines upfront to avoid future disputes. What qualifies someone as an author on multi-site studies?
The group should discuss these issues early to facilitate collaborative work and trust within the network. Clear policies can help maximize the benefits of this innovative research model.
'Demystifying Knowledge Transfer: an introduction to Implementation Science' M... (NEQOS)
Powerpoint presentation from 'Demystifying Knowledge Transfer: an introduction to Implementation Science' - 28th May 2014.
Facilitated by Professor Jeremy Grimshaw and Dr Justin Presseau
This document outlines a dissertation that investigates whether un-moderated or moderated group participation is better for knowledge creation and convergence within an online community of practice (CoP). The study uses a mixed methods sequential explanatory design including a quasi-experimental pre-post test and qualitative interviews and content analysis. Results from the pre-post tests and qualitative analysis indicate that both moderated and un-moderated groups showed learning, but the moderated group performed better. The study suggests online collaboration can help build legal skills and minimize degraded judgments by facilitating knowledge convergence within the organization.
1) Competency-based medical education (CBME) is an outcomes-based approach that uses competencies as an organizing framework for designing, implementing, assessing, and evaluating medical education programs.
2) Traditional medical education focuses on knowledge acquisition with a fixed length and variable outcomes, while CBME emphasizes knowledge application with a variable length and defined outcomes.
3) Effective assessment in CBME uses a variety of objective measurement tools aligned with outcomes, incorporates direct observation and authentic tasks, and emphasizes formative assessment to drive future learning.
Competency-based assessment: The good, the bad, and the puzzling (MedCouncilCan)
Three overlapping themes are discussed for effectively assessing competency:
1) Overcoming unintended consequences by reducing emphasis on exams as hurdles and promoting accountability for demonstration of learning. This involves quality improvement activities and using licensing data to facilitate learning plans.
2) Turning quality assurance into quality improvement by further integrating assessment across training with attention to improvement. This involves a formative testing platform and diagnostic assessments to feed data.
3) Ensuring authenticity by using portfolio-supported workplace assessments and increasing real world uncertainties in assessments. Examples include sequential OSCE stations and requiring reflection on alternative actions.
The document discusses different methods for evaluating the impact of an education program called the Balsakhi program in India. It compares the results of 4 different evaluation methods: 1) pre-post comparison showed a test score gain of 26.42 points, 2) simple difference comparison showed Balsakhi students scored 5.05 points lower, 3) difference-in-differences estimated an impact of 6.82 points, and 4) a regression controlling for covariates estimated an impact of 1.92 points. Randomization was proposed as the best method to construct a valid counterfactual for estimating true program impact.
The fifth webinar continues the momentum of the series as it focuses on providing concrete approaches for identifying barriers and enablers, emphasising behaviour change approaches.
READ MORE: http://bit.ly/2LOwbj0
Closing the Loop on Clinical Competency Based AssessmentsExamSoft
Presented by Karen Bobak, DC, EdD, Dean of Chiropractic, and Wendy Maneri, MS, DC, Associate Dean of Chiropractic Clinical Education and Health Centers, of New York Chiropractic College, discussed ways to effectively assessing student competency in a clinical setting is an essential element in professional healthcare education.
Effectively assessing student competency in a clinical setting is an essential element in professional healthcare education. Moreover, the use of assessment data to improve student learning is essential in order to meet program goals, professional standards, and provide quality patient care. Examples of strategies used to develop and implement a process of assessment / analysis / communication and change will be shared. Participants will be encouraged to develop a process framework while considering the challenges and opportunities that exist within their programs.
The document discusses various methods for assessing learners, including formative and summative assessments, and highlights the importance of balancing reliability, validity, cost and authenticity when selecting assessment tools. It compares methods for assessing different domains like knowledge, skills, problem-solving and attitudes. Taxonomies are useful for classifying learning objectives and selecting appropriate assessment methods matched to the level of learning.
Utilizing tools and resources to assess fellows decross sunday feb 24 1030-1100jakinyi
This document discusses tools and resources that can be used to assess gastroenterology fellows. It begins by outlining the presenter's needs as a program director with limited time. Process-based assessments that are artificial are deemed nearly useless. More meaningful assessments provide genuine growth opportunities through objective, high-volume tools with permanent records. Specific tools deemed useful include nurse, staff, patient, and lecture assessments. The GTE exam is considered the most useful tool as it is purely objective and identifies underperforming fellows early. Remediating struggling fellows requires a constructive approach with written plans and monitoring. Case examples demonstrate how feedback from nurses identified an interpersonal issue in one fellow, while another fellow was placed on notice to improve low GTE
This sample answer sheet corresponds with the eighth webinar in the Online Journal Club series, “How do young people make sense of cannabis evidence?"
The National Collaborating Centre for Methods and Tools is funded by the Public Health Agency of Canada and affiliated with McMaster University. The views expressed herein do not necessarily represent the views of the Public Health Agency of Canada.
NCCMT is one of six National Collaborating Centres (NCCs) for Public Health. The Centres promote and improve the use of scientific research and other knowledge to strengthen public health practices and policies in Canada.
NIHR Complex Reviews Support Unit (CRSU) - An IntroductionHEHTAslides
The NIHR Complex Reviews Support Unit (CRSU) was established in 2015 to support complex systematic reviews that are important to the UK NHS. In its first 18 months, the CRSU provided expertise and support to several Cochrane review groups on diagnostic test accuracy reviews, network meta-analyses, and other complex review methods. The CRSU also held workshops to build capacity for complex reviews and discussed challenges in meta-analyzing diagnostic test accuracy and network meta-analysis data.
This document discusses trends in health professions education. It covers topics such as outcome-based education, professionalism, learning through simulation, interprofessional education, and community-based medical education. Specifically, it outlines the goals of outcome-based education including balancing knowledge, skills, and attitudes. It provides examples of competency frameworks from organizations like ACGME. The document also discusses the importance of professionalism in medicine given changing public expectations. Additional sections cover how simulation can enhance learning and the benefits of interprofessional education and community-based training to better meet community health needs.
The document outlines the components of impact evaluation including needs assessment, theory of change, process evaluation, impact evaluation, and cost-effectiveness analysis. It discusses framing impact evaluation through a theory of change and using randomized evaluations as the gold standard for measuring a program's causal impact. Randomized evaluations compare outcomes between participants who are randomly assigned to a treatment group or control group to estimate the counterfactual.
Surveying the landscape: An overview of tools for direct observation and asse...MedCouncilCan
This document provides an overview of a framework and tools for direct observation and assessment in high-stakes settings. It discusses the challenges of incorporating workplace-based assessments into high-stakes evaluations due to issues with sampling, training, and measurement error. While there is limited psychometric evidence to support the use of workplace-based assessments in high-stakes contexts, jointly attesting to direct observation data may provide useful information to inform licensing decisions and identify gaps in assessment blueprints. The document advocates for more research on the reliability and validity of workplace-based assessments before incorporating local scores into high-stakes evaluations.
This document describes a study that evaluated the effects of using video screen captures of physicians in simulation-based learning experiences on associate degree nursing students' attitudes toward physician-nurse collaboration. Students completed a pre-test survey measuring their attitudes, then participated in a simulation using video of a physician. After debriefing, they completed the same survey as a post-test. Results found a statistically significant improvement in attitudes toward collaboration, indicating video captures can effectively simulate interactions with physicians without requiring live actors. The study demonstrated collaboration can be simulated with minimal technology investment.
Ovretveit implementation science research course (1 day, Sept 11)john
1. The document discusses a workshop on implementation science and research, which aims to explain what implementation science is, describe elements of an implementation program, explain strengths and limitations of implementation research studies, and plan an implementation study.
2. Implementation research mostly describes, evaluates and explains an implementation in different real-life settings. It involves assessing elements like content, structure, strategy, and methods using tools like CESSiM and REAIM.
3. Effective implementation is important for improving health outcomes and depends more on how interventions are implemented than just the intervention itself. Factors like context affect implementation success.
Achieving behaviour change for patient safety, Judith Dyson, Lecturer, Mental Health - University of Hull
Presentation from the Patient Safety Collaborative launch event held in London on 14 October 2014
More information at http://www.nhsiq.nhs.uk/improvement-programmes/patient-safety/patient-safety-collaboratives.aspx
Faster Improvement with Adaptive Implementation ResearchUCLA CTSI
Feb 3, 2016
Dr. John Ovretveit, Director of Research and Professor of Health Innovation and Evaluation at the Karolinska Institutet, presented as part of a seminar series on UCLA CTSI Dissemination, Improvement and Implementation Research.
This document discusses theory of change and its importance for evaluation. It begins by introducing theory of change and explaining that it is a process for exploring how change happens in a particular context. It then discusses building a theory of change by defining a program, its outcomes and intermediate steps, and identifying assumptions. The document explains that theory of change is important for evaluators to consider process and for programmers to be results-oriented. It also notes a common criticism is that theory of change can oversimplify programs.
This document summarizes a study that evaluated psychiatric nursing students' experiences in a simulated mental health ward prior to their clinical internship. The study aimed to understand students' perspectives on how the simulation prepared them for internship. Students completed a standardized evaluation scale after participating in simulations involving patient scenarios. Both quantitative and qualitative feedback was collected. Students reported that the simulation experience helped develop their clinical skills and critical thinking. They felt more prepared to care for patients. However, students also found the simulation experience anxiety-provoking. Suggestions for improving simulations included integrating them more frequently throughout the program to increase comfort and reducing the stressful environment.
This document summarizes a presentation on assessment and multiple choice questions (MCQs). It discusses how assessment guides student learning through the backwash effect. Formative assessment supports learning by allowing recurrent testing. Course leaders can influence learning by aligning teaching and examinations. The presentation then focuses on constructing high-quality MCQs, including considering different cognitive levels, question types, and crafting effective stems, options, and keys. Workshops are proposed to build MCQ databases, test questions, conduct talk-aloud protocols with students, and eventually digitalize the assessment process.
Here are 3 key questions the research group may want to consider:
1. Standardizing outcomes and assessments across sites while allowing for local flexibility. What core metrics and tools can be used consistently, while permitting some customization?
2. Developing a governance structure and processes that are inclusive yet efficient. How will decisions be made to balance needs of all members?
3. Determining authorship guidelines upfront to avoid future disputes. What qualifies someone as an author on multi-site studies?
The group should discuss these issues early to facilitate collaborative work and trust within the network. Clear policies can help maximize the benefits of this innovative research model.
'Demystifying Knowledge Transfer- an introduction to Implementation Science M...NEQOS
Powerpoint presentation from 'Demystifying Knowledge Transfer: an introduction to Implementation Science' - 28th May 2014.
Facilitated by Professor Jeremy Grimshaw and Dr Justin Presseau
This document outlines a dissertation that investigates whether un-moderated or moderated group participation is better for knowledge creation and convergence within an online community of practice (CoP). The study uses a mixed methods sequential explanatory design including a quasi-experimental pre-post test and qualitative interviews and content analysis. Results from the pre-post tests and qualitative analysis indicate that both moderated and un-moderated groups showed learning, but the moderated group performed better. The study suggests online collaboration can help build legal skills and minimize degraded judgments by facilitating knowledge convergence within the organization.
1) Competency-based medical education (CBME) is an outcomes-based approach that uses competencies as an organizing framework for designing, implementing, assessing, and evaluating medical education programs.
2) Traditional medical education focuses on knowledge acquisition with a fixed length and variable outcomes, while CBME emphasizes knowledge application with a variable length and defined outcomes.
3) Effective assessment in CBME uses a variety of objective measurement tools aligned with outcomes, incorporates direct observation and authentic tasks, and emphasizes formative assessment to drive future learning.
Competency-based assessment:The good, the bad, and the puzzlingMedCouncilCan
Three overlapping themes are discussed for effectively assessing competency:
1) Overcoming unintended consequences by reducing emphasis on exams as hurdles and promoting accountability for demonstration of learning. This involves quality improvement activities and using licensing data to facilitate learning plans.
2) Turning quality assurance into quality improvement by further integrating assessment across training with attention to improvement. This involves a formative testing platform and diagnostic assessments to feed data.
3) Ensuring authenticity by using portfolio-supported workplace assessments and increasing real world uncertainties in assessments. Examples include sequential OSCE stations and requiring reflection on alternative actions.
Technology to Personalize Learning for Gifted KidsBrian Housand
Brian Housand, Ph.D.
brianhousand.com
Since the dawn of the computer revolution, the promise of PERSONAL Computing has been ever present. Yet, when we simply leave gifted kids to their own devices, technology can serve to depersonalize their experiences. However, this need not be the case. Together, we will explore the possibilities and potential afforded by today’s technology and empower you to utilize technology resources to make learning personal and meaningful for today’s connected gifted students.
This document makes 40 predictions about the future of virtual reality (VR) and augmented reality (AR) technologies between 2016 and 2025. Some of the key predictions include: 1) VR headsets will be permanently in the consumer market starting in 2016; 2) By 2020 there will be around 1 billion PC/console gamers and 4 billion smartphone users; 3) By 2020 VR porn will be a $1 billion industry worldwide. The document predicts widespread adoption and integration of VR and AR technologies across gaming, entertainment, social platforms, and other industries over the next decade.
VR and AR technologies in healthcare: possibilities and case studies (VR & AR Technologies in Medical Applications)宇軒 黃
[Cross-X Innovation Industry Meetup] #6
VR technology ignites a next-generation teaching revolution: starting from healthcare and education
VR and AR Technologies in Medical Applications
According to a report by a US research firm, the AR/VR healthcare market will reach US$2.54 billion by 2020, driven mainly by simulation training and rehabilitation therapy. What could be achieved by integrating virtual and augmented reality applications into future healthcare education?
Workshop — The Art of Writing Good Multiple-Choice Questions for High-Stakes ...MedCouncilCan
The document provides guidance on writing effective multiple choice questions for high-stakes medical exams, outlining key concepts like defining the purpose and concept being tested in each question, ensuring the stem provides sufficient relevant information to answer the question, and avoiding technical flaws like negative wording, logical cues, or word repeats that could inadvertently provide clues to the correct answer.
1) The document provides a tutorial on how to form an answerable clinical question using the PICO (TT) model. It explains the components of a well-built clinical question and how to identify the type of clinical question and best study design.
2) Several clinical scenarios are presented and the reader is asked to formulate each scenario as a PICO question, identify the question type, and recommended study design.
3) The document concludes by emphasizing that developing a clear clinical question using PICO helps efficiently find the best evidence to answer the question. It also provides information on additional education available on evidence-based care topics.
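The PICO components described above can be treated as a small data structure, which makes the "well-built question" idea concrete. This is a hypothetical sketch: the class, field names, and clinical scenario are invented for illustration, not taken from the tutorial.

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    patient: str       # P: patient, population, or problem
    intervention: str  # I: intervention or exposure
    comparison: str    # C: comparison (may be "usual care" or none)
    outcome: str       # O: outcome of interest

    def as_question(self) -> str:
        # Render the four components as one answerable clinical question.
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome}?")

# Invented example scenario.
q = PicoQuestion(
    patient="adults with type 2 diabetes",
    intervention="metformin",
    comparison="lifestyle modification alone",
    outcome="HbA1c at 12 months",
)
print(q.as_question())
```

Structuring the question this way also maps directly onto a literature-search strategy: each field becomes a search concept to combine.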
Learning Objectives
1. Identify strategies for clinical reasoning.
2. Identify the RIME framework for clinical competency.
3. Identify how to facilitate bedside teaching (according to the Cox model).
- The document outlines the key principles of taking an effective patient history, including establishing rapport, using open-ended questions, and summarizing information.
- It recommends following models like the Calgary-Cambridge guide, which structures the consultation into initiating the session, gathering information, physical examination, explanation and planning, and closing the session.
- Effective history taking relies on active listening skills, maintaining the patient's privacy and comfort, and providing a safety net for next steps.
This document provides an overview of evidence-based practice (EBP) including its definition, importance, evolution, decision-making process, benefits, and misconceptions. It outlines a 5-step approach to EBP: formulating a question, finding evidence, appraising evidence, applying to practice while considering patient values, and evaluating effectiveness. Various resources and levels of evidence are also defined to help practitioners implement EBP and provide the highest quality, cost-effective care.
The document discusses various types of assessment instruments used in classrooms including placement, screening, formative, and summative assessments. It also covers topics like Bloom's taxonomy, Miller's pyramid, validity, reliability, feasibility, and utility of assessments. Specific assessment types discussed in more detail include multiple choice questions, modified essay questions, and patient management problems. Key aspects like construction, measurement abilities, and uses of each type are provided.
240119-Evidence Based Medicine nnnnn.pptxMyThaoAiDoan
The document provides an overview of evidence-based medicine (EBM) and its five-step process. It defines EBM as the integration of best research evidence with clinical expertise and patient values. The five steps of EBM are: 1) defining the clinical problem, 2) finding evidence, 3) appraising the evidence, 4) applying to patient care, and 5) evaluating the application. Key points include using the PICO framework to build clinical questions, considering the type of evidence needed based on the question, and searching reliable sources of pre-appraised evidence like Cochrane reviews. An example shows how to apply these concepts to a patient scenario.
Chapter 2
Study Designs

Learning Objectives
• List and define the components of a good study design
• Compare and contrast observational and experimental study designs
• Summarize the advantages and disadvantages of alternative study designs

Learning Objectives
• Describe the key features of a randomized controlled trial
• Identify the study designs used in public health and medical studies
Study Designs
• Observational Studies
– Case-series study
– Cross-sectional (prevalence) survey
– Case-control study
– Cohort study
• Experimental Studies
– Randomized Controlled (Clinical) Trial
Inferences
• Observational studies: inferences limited to descriptions and associations; with carefully designed analysis (statistical adjustment), stronger inferences can be made
• Experimental studies: cause and effect
• In ALL studies, a careful definition of the disease (outcome) and the exposure (risk factor) is needed
Which Design Is Best?
• Depends on the study question
• What is the current knowledge on the topic?
• How common are the disease and its risk factors?
• How long would the study take, and what would it cost?
• Ethical issues
Case Report/Case Series
• Observational study
• Case report: detailed report of specific features of a case
• Case series: systematic review of common features of a small number of cases
• Advantage: cost-efficient
• Disadvantages: no comparison group, no specific research question
Case-Series
• Simplest design: description of interesting observations in a small number of individuals
• Case-series usually do not involve control patients (i.e., patients free of disease)
• Usually lead to the generation of hypotheses for more formal testing
• Criticisms: not planned, no research hypotheses
Case-Series
• Gottlieb (1981) studied 5 young homosexual men with a rare form of pneumonia and other unusual infections
• The initial report was followed by more series (26 cases in NY and CA; a "cluster" in southern CA; 34 cases among Haitians, etc.)
• The condition was termed AIDS in 1982
Cross-Sectional Survey
• Observational study conducted at a point in time
• Advantages: cost-efficient, easy to implement, ethical
• Disadvantages: no temporal information, non-response bias
Cross-Sectional Survey
• Is there an association between diabetes and cardiovascular disease (CVD)?
[Diagram: patients with and without diabetes cross-classified against patients with and without CVD at a single point in time]
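The diabetes/CVD question above reduces to a 2×2 table in a cross-sectional survey. The counts below are invented purely for illustration; the point is that this design yields prevalences and a prevalence odds ratio, but no information about which condition came first.

```python
# Hypothetical 2x2 table (all counts invented):
#                 CVD     no CVD
# Diabetes        a=40     b=60
# No diabetes     c=50     d=350
a, b, c, d = 40, 60, 50, 350

# Prevalence of CVD within each exposure group.
prevalence_cvd_diabetes = a / (a + b)        # 40/100  = 0.400
prevalence_cvd_no_diabetes = c / (c + d)     # 50/400  = 0.125

# Cross-product (prevalence) odds ratio.
odds_ratio = (a * d) / (b * c)               # (40*350)/(60*50)

print(f"CVD prevalence with diabetes:    {prevalence_cvd_diabetes:.3f}")
print(f"CVD prevalence without diabetes: {prevalence_cvd_no_diabetes:.3f}")
print(f"Prevalence odds ratio:           {odds_ratio:.2f}")
```

An odds ratio above 1 here indicates an association, but because exposure and outcome are measured simultaneously, the design cannot say whether diabetes preceded the CVD.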
Prospective Cohort Study
• Observational study involving a group (cohort) of individuals who meet inclusion criteria, followed prospectively in time for risk factor and outcome information
• Advantages: can assess temporal relationships
• Disadvantages: need large numbers for rare outcomes; confounding
Cohort Study
• Is there an association between hypertension and cardiovascular disease?
[Diagram: at the study start time, the cohort divides into hypertension and no-hypertension groups, each followed forward to a CVD or no-CVD outcome]
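Once follow-up of the hypertension/CVD cohort is complete, the association can be quantified directly as a risk ratio, because exposure groups were defined first and then followed forward for outcomes. All counts below are invented for illustration.

```python
# Hypothetical follow-up results (all counts invented):
# Exposed (hypertension): 30 develop CVD out of 200 followed.
# Unexposed:              20 develop CVD out of 400 followed.
exposed_cases, exposed_total = 30, 200
unexposed_cases, unexposed_total = 20, 400

# Cumulative incidence (risk) in each group.
risk_exposed = exposed_cases / exposed_total        # 0.15
risk_unexposed = unexposed_cases / unexposed_total  # 0.05

# Relative risk: how many times more likely the outcome is in the exposed.
relative_risk = risk_exposed / risk_unexposed

print(f"Risk in hypertensive group:  {risk_exposed:.2f}")
print(f"Risk in normotensive group:  {risk_unexposed:.2f}")
print(f"Relative risk (risk ratio):  {relative_risk:.1f}")
```

A relative risk of 3 would mean hypertensive members of the cohort were three times as likely to develop CVD over the follow-up period; this direct risk calculation is what the cross-sectional design above cannot provide.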
Cohort Studies
• Identify a group of individuals that meet inclusion crit ...
How to form a clinical question. cincinnati childrensCatherineMiller2
This document provides a tutorial on how to form an answerable clinical question in 5 steps: 1) Ask, 2) Acquire, 3) Appraise, 4) Apply, 5) Assess. It discusses using the PICO (Patient, Intervention, Comparison, Outcome) model to develop a well-built clinical question and identifies the type of clinical question and best study design. Clinical scenarios are presented and answered in PICO format to demonstrate how to apply this process. Additional training opportunities in evidence-based care are listed.
Evidence-based applicability in clinical settingElhadi Miskeen
This document discusses the concept and application of evidence-based medicine (EBM). It begins by defining EBM as the integration of best research evidence, clinical expertise, and patient values. It then outlines the five steps of EBM: 1) formulating an answerable clinical question, 2) finding relevant evidence, 3) appraising the evidence critically, 4) applying the evidence to practice, and 5) evaluating performance. The document provides examples of formulating questions in PICO format and searching strategies. It also discusses study designs and hierarchies of evidence, emphasizing that randomized controlled trials provide the strongest evidence when evaluating interventions. The goal of EBM is to improve healthcare quality by incorporating valid and applicable research findings.
This document discusses how to formulate clinical questions to guide searches of the medical literature. It introduces the PICO framework for structuring questions around patients, interventions, comparisons, and outcomes. Five common types of clinical questions are identified: therapy, harm, differential diagnosis, diagnosis, and prognosis. Each question type lends itself to different study designs that provide the best evidence. Examples are provided to demonstrate how unstructured questions can be clarified using PICO. The goal is to construct answerable clinical questions that facilitate efficient literature searches.
RSS 2012 Developing Research Idea and QuestionWesam Abuznadah
This document discusses the research process and how to formulate answerable research questions. It explains that the research process begins with identifying a knowledge gap and transforming it into a clear research question. It also discusses where research questions come from and how to define a good question based on importance, interest, and answerability. The document provides guidance on formulating an answerable PICO (population, intervention, comparator, outcome) question and determining the best feasible study type to answer the question. Common study types discussed include observational studies like cross-sectional, cohort and case-control studies, as well as experimental intervention studies.
How To Read A Medical Paper: Part 2, Assessing the Methodological QualityDrLukeKane
This document outlines five essential questions to ask when assessing the methodological quality of papers: 1) Was the study original? 2) Whom is the study about? 3) Was the design of the study sensible? 4) Was systematic bias avoided or minimized? 5) Was the study large enough and long enough to make the results credible? It discusses factors to consider for each question when evaluating a study's methods section such as sample size, duration of follow up, and completeness of follow up.
Concise explaining of Evidence-Based Medicine and discussing the following: 1-What is Evidence-Based Medicine?
2-Why Evidence-based Medicine?
3-Options for changing clinicians' practice behaviour
4- EBM Process- Five Steps
5-Seven alternatives to evidence-based medicine
This document discusses key concepts in clinical research and scientific inquiry. It defines research as a systematic investigation designed to contribute to generalizable knowledge. The anatomy of a research project includes a research question, background and significance, study design, subjects, variables, and statistical issues. A good research question should be feasible, interesting, novel, ethical, and relevant. The physiology of research relates to internal and external validity and minimizing random and systematic errors.
This document discusses evidence-based medicine (EBM) and outlines the key steps in applying an EBM approach to clinical practice. It begins by contrasting the old paradigm of unsystematic clinical experience with the new EBM paradigm. The 5 key steps of EBM are then summarized as: 1) developing answerable clinical questions, 2) searching for and obtaining relevant evidence, 3) critically appraising the evidence, 4) using clinical expertise and patient preferences to apply evidence to individual patients, and 5) evaluating outcomes. The document provides guidance on formulating answerable clinical questions, identifying appropriate study designs, critically appraising evidence, and integrating evidence with expertise and patient values in clinical decision making.
TEST BANK For Critical Thinking, Clinical Reasoning, and Clinical Judgment A ...robinsonayot
TEST BANK For Critical Thinking, Clinical Reasoning, and Clinical Judgment A Practical Approach 7th Edition by Rosalinda Alfaro-LeFevre, Verified Chapters 1 - 7, Complete Newest Version.pdf
Tests of Knowledge: How Can They Contribute to Maintenance of Certification ...IAMRAreval2015
Tests of knowledge can contribute to maintenance of certification (MoC) and revalidation in two ways: assessment of learning and assessment for learning. Assessment of learning uses high-stakes knowledge tests summatively to identify doctors with performance problems, while assessment for learning takes a longitudinal approach through frequent, low-stakes tests to provide feedback and assist doctors in keeping knowledge up to date. There are challenges to the use of knowledge tests, such as determining relevant content and preventing tests from being seen as irrelevant facts, that different assessment approaches aim to address.
To understand why a study abstract is important to scientific communication.
To understand the process by which abstracts are selected for presentation at scientific conferences.
To learn the features which unite successful abstract submissions.
Peering through the Looking Glass: Towards a Programmatic View of the Qualify...MedCouncilCan
André De Champlain presented on developing a programmatic view of the MCC Qualifying Examination. Key points include:
1) The Assessment Review Task Force recommended validating and updating the blueprint for MCC examinations and exploring a more integrated, continuous model of assessment along the physician's educational continuum.
2) A proposed Medical Education Assessment Advisory Committee would provide guidance on incorporating authentic, linked assessments throughout training and practice.
3) Validating a program of assessment would require evaluating the reliability of individual elements as well as the entire program, and gathering multiple types of evidence to support the validity of score interpretations.
Pushing the Boundaries of Medical Licensing MedCouncilCan
The document summarizes a presentation given at the Medical Council of Canada's 103rd Annual Meeting about pushing the boundaries of medical licensing examinations by applying a programmatic framework. The presentation discusses gaps between the current MCC exams (Part I and Part II) and the new competency blueprint, and proposes a model for a national programmatic approach to assessment. This would involve filling gaps with other assessments like workplace-based evaluations, reflections, and multi-source feedback from medical school, and linking various assessments along the continuum of undergraduate medical education, postgraduate medical education, and practice to inform licensure decisions. Speakers will discuss the advantages and challenges of adopting this broader programmatic assessment approach beyond the current two high-stakes licensing exams.
Peeking behind the test: insights and innovations from the Medical Council of...MedCouncilCan
2015 CCME
MCC Business Session
Peeking behind the test: insights and innovations from the Medical Council of Canada. We will showcase new technological innovations such as the automated item generation, automated scoring and the MCC’s new item bank MOC5.
The document provides information and instructions for candidates taking the National Assessment Collaboration exam. It discusses the structure and content of the exam, which assesses entry-level competence in medicine and lasts about 3 hours. Candidates are not permitted to discuss exam content due to confidentiality agreements. The summary also outlines what candidates can bring, the check-in process, and expectations during the exam, which involves interactions with standardized patients at different clinical stations.
The New Blueprint: challenging the comfort zoneMedCouncilCan
This document outlines the progress of the Blueprint Project since 2013. It proposes common frameworks to assess physicians for high-stakes decision making at two points: entry into residency and independent practice. Stakeholder consultations provided feedback on proposed assessment dimensions and definitions. Gap analyses found MCC exams currently underrepresent chronic illness and psychosocial aspects. Future work includes developing additional assessments through opportunities like e-portfolios and item banks to fully address the blueprint. Workshops will discuss including various assessments in an e-portfolio and sharing assessment data between organizations. The project aims to ensure physicians are qualified for practice through a rigorous yet evolving assessment system.
This document provides information and instructions for candidates taking the National Assessment Collaboration (NAC) examination. The NAC examination evaluates candidates' clinical competence and knowledge in various medical specialties. Candidates must maintain strict confidentiality of all exam content and are prohibited from discussing exam questions or cases. The exam duration is approximately 3 hours, with an additional sequestering period before and after. Candidates should bring only a reflex hammer, stethoscope, and lab coat. All personal items will be stored in a limited coat check area.
NAC PRA update - 2014 Ottawa ConferenceMedCouncilCan
This document outlines the Pan-Canadian Practice Ready Assessment for IMG Physicians, which aims to establish competency-based standards for provisional licensure in family medicine. It discusses the background and challenges with integrating International Medical Graduates. The Practice Ready Assessment is presented as a process using Miller's pyramid to assess clinical competence through point-in-time and over-time evaluations. Standards are developed in collaboration with various stakeholders and focus on what candidates can do rather than how assessments are implemented. Assessments occur in practice environments over 12 weeks using multi-source feedback from patients and colleagues to determine practice readiness.
This document outlines the process used to develop a defensible blueprint for physician licensing assessments. It involved gathering information from various sources, including a literature review, surveys of physicians and the public, and a 3-day meeting with 12 subject matter experts. The experts developed a proposed common blueprint with dimensions of care (e.g. acute, chronic) and physician activities (e.g. assessment, management). The blueprint defines the content and weighting for assessments leading to two decision points: entry into residency and independent practice. Developing a defensible blueprint requires defining the assessment purpose, choosing appropriate information sources, and including stakeholder judgments.
2014 Candidate Orientation Presentation - Certification Examination in Family...MedCouncilCan
The OSCE portion of the Certification Examination in family medicine consists of 8 patient encounter stations and 2 rest stations on Saturday. Candidates are given instructions before each station and have 10 minutes to complete the task, which may include a physical exam, history taking, or patient management. Standardized patients and examiners evaluate the candidates' performance. Candidates must bring only approved materials and remain on site for sequestering before and after the exam.
The document provides information and instructions for candidates taking the National Assessment Collaboration Examination. It outlines that the exam assesses clinical competence through problems in various medical disciplines. It stresses confidentiality of exam materials and details logistics like the exam duration, items allowed and prohibited, and physical examination procedures. Candidates are guided on navigation of exam stations and interactions with examiners and standardized patients.
Pre-Exam Orientation for Candidates taking the Certification Examination in F...MedCouncilCan
The document provides an overview of the OSCE (objective structured clinical examination) portion of the Certification Examination in Family Medicine. It describes the structure and format of the 12-station clinical skills exam, including the timing and tasks involved in each station. Candidates are given guidance on logistics like registration, the use of bar code labels and notebooks, interactions with examiners and standardized patients, and confidentiality expectations after the exam.
The document provides an orientation for candidates taking the Medical Council of Canada Qualifying Examination Part II. It outlines the purpose and structure of the two-day exam, which consists of patient encounters and written questions. Candidates are instructed on logistics like what to bring, registration procedures, the timing of stations, and how interactions with examiners and standardized patients will be evaluated during the exam.
Practice Ready Assessment for IMG PhysiciansMedCouncilCan
1. The document discusses the development of standards for assessing international medical graduates (IMGs) seeking provisional licensure through a Practice Ready Assessment (PRA) in Canada.
2. It outlines accomplishments over the past year in establishing competency-based standards for assessing family medicine physicians through a PRA.
3. Next steps discussed include developing standards for assessing psychiatry and internal medicine physicians, as well as ensuring the long-term sustainability and comparability of the PRA process across Canada.
The document outlines the process undertaken by the Blueprint Project Team to define a new blueprint and test specifications for the Medical Council of Canada (MCC) examinations. Key aspects of the process included consultation with subject matter experts, review of reports on current issues in healthcare, and a national survey of physicians, pharmacists, nurses and the public. Based on this information, the team proposed a common blueprint with dimensions of care (e.g. acute, chronic, psychosocial) and physician activities (e.g. assessment, management, communication) to assess core competencies across two decision points - entry into supervised practice and unsupervised practice. The team engaged in consultation with stakeholders to gather feedback on the proposed blueprint and next steps.
1. The Development of Multiple-Choice Questions using the Key-features Approach
Claire Touchie, MD, FRCPC
University of Ottawa
Medical Council of Canada
3. Workshop Agenda
1. Introductions (10 min)
2. What is the key-features approach (20 min)
3. Exercise 1: Defining key-features (15 min)
4. How to write multiple choice items (25 min)
5. Exercise 2: Writing multiple-choice questions (30 min)
6. Leg Stretch (10 min)
7. Large group discussion: Review of MCQs (30 min)
8. Exercise 3: Technical flaws (15 min)
9. What are technical flaws (15 min)
10. Wrap up and evaluations
4. Workshop Objectives
• Describe what can be tested with multiple-choice items
• Define the anatomy of a multiple-choice item
• Define and identify technical flaws
• Create multiple-choice items for one's own stated purpose
• Define and critique poor-performing items
5. Why are we doing this?
Which one of the following is true about pseudogout?
1. It occurs frequently in women.
2. Seldom associated with acute pain in a joint
3. May be associated with a finding of chondrocalcinosis.
4. It is hereditary in all cases
5. It responds well to treatment with allopurinol
6. Why are we doing this?
A 62-year-old woman with a history of confusion and constipation comes to the office for a follow-up visit. Laboratory investigations reveal a serum calcium of 2.9 mmol/L, a creatinine of 146 µmol/L, and a hemoglobin of 108 g/L.
Which one of the following is the most likely diagnosis?
1. Hyperparathyroidism
2. Chronic renal failure
3. Multiple myeloma*
4. Vitamin D intoxication
5. Renal cell carcinoma
7. Key Feature Problems
•Based on the concept of “Case Specificity”
– The clinical performance on one problem is NOT a good predictor of performance on other problems
•Assessment is best served by focusing exclusively on the unique challenges (key features) in the resolution of each problem
– Essential issues or specific difficulties
8. Key Feature Problems
•First discussed at Cambridge Conference – 1984
•Developed by Georges Bordage and Gordon Page for
the MCC
•First incorporated into the MCCQE Part I – 1992
•Known under different names
– Q4, Clinical Reasoning Skills (CRS), Clinical Decision
Making (CDM)
9. So what? Who cares?
•Studies show validity evidence and reliability in testing settings
•Wenghofer et al. (Med Educ 2009)
– Candidates in the bottom quartile had a 3-fold increase in the risk of an unacceptable quality-of-care assessment outcome (OR 3.41)
10. Key Feature Problems
Assesses decision-making skills, NOT recall of factual information
(Diagram: knowledge, through the application of knowledge, yields a clinical decision)
11. Application of Knowledge
•To elicit clinical clues
•To formulate diagnostic impressions
•To order investigative or follow-up procedures
•To acquire data to monitor a course of action OR evaluate the severity/probability of an outcome
•To select a management course
12. Example of Knowledge question
•Which of the following are characteristic of delirium?
13. Alternate type of question
Should assess the ability to:
– Recognize delirium tremens in a specific patient
• An example of a “clinical reasoning” issue
– Prescribe appropriate therapeutic measures
• An example of a “clinical decision” issue
14. Key Feature – Definition
•A critical or essential step in the resolution of a problem
•A step in which examinees are most likely to make errors in the resolution of a problem
•A difficult or challenging aspect of the identification and management of the problem in practice
15. Advantages of the Key Feature Approach
•More discriminating
•Shifts the emphasis from
– The method of assessment to the object of assessment
– Assessing all aspects of solving a problem to assessing only the essential elements
16. Key Question of Key Features
“What are the critical, essential elements in the resolution of the problem?”
17. Process in Key Feature Development
•Problem definition
•Selecting a key feature
•Developing a clinical scenario
•Identifying the correct answer (in the case of a single-correct MCQ)
•Identifying plausible distractors
19. Problem Definition
Select an objective or a clinical problem… delirium/confusion
Select a clinical situation
– Undifferentiated complaint
– A single typical problem*
– A multi-system problem
– A life-threatening event*
– Preventive care and health promotion
20. Selecting a Key Feature
•Ask the question
– What are the critical, essential elements in the resolution of the problem?
•Key feature 1
– Given a patient with post-operative delirium, ask about EtOH consumption
•Key feature 2
– Given a patient with post-operative delirium, recognize delirium tremens and manage with benzodiazepines
22. How to write multiple-choice items
1. The What
2. The How
23. The What
• What can I test with MCQs?
• Knowledge
• Clinical-decision making
• Clearly define the purpose of your exam
• Define what it is you want to test
• For the overall test
• For your specific question
24. The What
Prior to writing your question, ask yourself the following:
• What concept do I want to test?
• Where does the learner go wrong?
• Focus on areas of “challenge” for the learner
25. Example of the What
Purpose: To assess the clinical clerk’s knowledge and decision-making capability at the end of a six-week Internal Medicine rotation
• Concept/Objective: Management of
CHF
• Challenge to the learner: ???
26. The How – Anatomy of a MC item
• Stem
• Clinical vignette which describes the setting, the patient’s age and complaint, along with pertinent historical facts, physical exam details, and/or laboratory findings
• Lead-in question
• The task
• Answer and alternatives
• The most correct answer and the plausible distractors
27. The How – Anatomy of a MC item
Stem
• A 58-year-old man presents to the ED with sudden onset of left-sided chest pain associated with shortness of breath, palpitations and dizziness. His past history is relevant for a recent diagnosis of lung carcinoma. His examination is only remarkable for a heart rate of 112/minute.
28. The How-Anatomy of a MC item
Lead-in question
• Which one of the following diagnostic tests would be most useful to confirm the diagnosis?
29. The How – Anatomy of a MC item
Correct/Best answer and distractors
1. Chest radiograph
2. CT of the chest *
3. Sputum culture
4. Electrocardiogram
5. Echocardiogram
30. What did you notice about this question?
Could you answer it without seeing the alternatives?
Could you answer it without being given the diagnosis?
31. The Stem
• Short description of a clinical scenario
• Common or clinically important
• Clear and contains information relevant to the clinical problem – avoid window-dressing
• Word the stem positively
• Avoid EXCEPT questions
• Use negative words with caution
• e.g., contraindication, what to avoid
32. The Stem
• Provide sufficient information to answer the item
• DO NOT create tricky items by omitting essential information
• DO NOT add extraneous information
• The stem should be a clinical vignette
33. Lead-in Question
• Ensure the directions are very clear, with a clear task
• Can the stem be administered in a short-answer (constructed-response) format?
• “Cover answer test”
34. Lead-in Question
• Different clinical tasks can be tested
• Can be done with the same stem (cloning of the question)
• History
• Diagnosis
• Investigations
• Management/Treatment/Drug therapy
• Counseling
35. Lead-in Question
Try asking questions that lead to clinical decision-making:
Which one of the following
– … is the most likely diagnosis?
– … investigations would you now order?
– …is the next step in the work-up of this patient?
– … is the most important step in the initial management of this patient?
36. Distractors
• Only one right choice; the rest are distractors
• The number of distractors is a policy decision (3 vs. 4?)
• Use plausible distractors
• Keep distractors independent; they should not overlap
• E.g.: 1. 11-20; 2. 15-30
• Keep distractors homogeneous in content and grammatical structure
• Keep the lengths about equal
• Avoid specific determiners such as all, never, always, completely and absolutely
• Do not use “All of the above” or “None of the above”
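The independence rule is mechanical enough to check in software. A minimal sketch, assuming closed numeric ranges like the 11-20 / 15-30 example above (the function name is my own):

```python
def ranges_overlap(a, b):
    # Two closed ranges [a0, a1] and [b0, b1] overlap
    # unless one ends before the other starts.
    return a[0] <= b[1] and b[0] <= a[1]

print(ranges_overlap((11, 20), (15, 30)))  # True: options are not independent
print(ranges_overlap((11, 20), (21, 30)))  # False: options are independent
```

Any pair of numeric distractors that overlaps by this check should be rewritten so that exactly one option can contain the correct value.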
37. Item testing clinical decision-making
A 62-year-old woman with a history of confusion and constipation comes to the office for a follow-up visit. Laboratory investigations reveal a serum calcium of 2.9 mmol/L, a creatinine of 146 µmol/L, and a hemoglobin of 108 g/L.
Which one of the following is the most likely diagnosis?
1. Hyperparathyroidism
2. Chronic renal failure
3. Multiple myeloma*
4. Vitamin D intoxication
5. Renal cell carcinoma
38. Item testing clinical decision-making
A 62-year-old woman with a history of confusion and constipation comes to the office for a follow-up visit. Laboratory investigations reveal a serum calcium of 2.9 mmol/L, a creatinine of 146 µmol/L, and a hemoglobin of 108 g/L.
Which one of the following would help confirm the diagnosis?
1. Parathyroid hormone
2. Serum protein electrophoresis*
3. 25-OH vitamin D
4. Serum creatinine
5. Abdominal ultrasound
39. What is wrong with this item?
A previously healthy person suddenly presents with pleuritic pain in the left chest and shortness of breath.
Which one of the following is the most likely diagnosis?
1. Mycoplasma pneumonia
2. Spontaneous pneumothorax
3. Pulmonary embolism
4. Acute pericarditis
5. Epidemic pleurodynia
40. What is wrong with this item?
Which one of the following is true about pseudogout?
1. It occurs frequently in women.
2. Seldom associated with acute pain in a joint
3. May be associated with a finding of chondrocalcinosis.
4. It is hereditary in all cases
5. It responds well to treatment with allopurinol
41. What is wrong with this item?
Aortic insufficiency may be caused by all of the following, EXCEPT:
1. syphilis
2. Marfan’s syndrome
3. aortic dissection
4. bacterial endocarditis
5. myocardial infarction*
42. Exercise #2 – 30 minutes
Writing MCQ items
• Pair up with a partner
• Using the MCQ item development worksheet, develop MCQ items that will be useful to you
44. Exercise #2 – Pre-test
A Test of General Rock and Roll Knowledge
45. Technical Flaws
Violations of test item writing principles
• Flawed items are usually more difficult and fail more students
(Downing, 2002)
46. Technical Flaws
• Unfocused items
• Negative stem or lead-in question
• Heterogeneous options
• Logical or grammatical cues
• Long correct answer
• Word repeats
• Convergence strategy
47. Unfocused item
Which one of the following is true about pseudogout?
1. It occurs frequently in women.
2. Seldom associated with acute pain in a joint
3. May be associated with a finding of chondrocalcinosis.
4. It is hereditary in all cases
5. It responds well to treatment with allopurinol
48. Negative stem or lead-in question
Which of the following does not cause aortic insufficiency?
1. syphilis
2. Marfan’s syndrome
3. aortic dissection
4. bacterial endocarditis
5. myocardial infarction*
49. Heterogeneous options
A 24-year-old female presents to a walk-in clinic with fever, flank pain, frequency and dysuria. The urinalysis (urine microscopy) shows 1+ proteinuria, 25 white blood cells per high power field and a few granular casts.
Which one of the following investigations is the next best step?
1. Intravenous pyelography.
2. Intravenous antibiotics.
3. Creatinine clearance.
4. Midstream urine culture.*
5. Oral analgesia.
50. Logical cues
A 47-year-old man presents with an acute episode of psychosis. Which one of the following treatments would you consider prescribing?
1. Alprazolam
2. Lorazepam
3. Haloperidol*
4. Diazepam
5. Quetiapine
51. Grammatical cues
A 78-year-old man undergoes a thoracentesis for a large pleural effusion. Three hours later, he develops sudden onset of shortness of breath. What is the most likely diagnosis?
1. Reaccumulation of fluid
2. Pneumothorax*
3. Lung infection
4. Bleeding
5. Blood clot
52. Word repeats
Also known as “clang association”:
A 45-year-old woman presents with sudden loss of consciousness. On exam, her vitals are normal, she is not pale and she is not diaphoretic. Which one of the following is more typical of “fainting” as a conversion symptom than of a syncopal attack due to orthostatic hypotension?
1. Bradycardia.
2. Muscle twitching.
3. Absence of pallor and sweating.*
4. Urinary incontinence.
5. Rapid recovery.
53. Convergence strategy
An 86-year-old woman fell at the local nursing home and sustained an intertrochanteric fracture of her left hip. On clinical examination, you would expect to find her left leg:
1. Shortened, abducted and internally rotated.
2. Lengthened, abducted and internally rotated.
3. Shortened, adducted and externally rotated.
4. Shortened, abducted and externally rotated.*
5. Lengthened, abducted and externally rotated.
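The convergence cue can be demonstrated mechanically: a test-wise examinee scores each option by how often its elements recur across all options and picks the highest-scoring one. A minimal sketch using the hip-fracture item above (the scoring scheme is my own illustration, not a published algorithm):

```python
from collections import Counter

# The five options from the hip-fracture item, split into their elements
options = {
    1: ("shortened", "abducted", "internally rotated"),
    2: ("lengthened", "abducted", "internally rotated"),
    3: ("shortened", "adducted", "externally rotated"),
    4: ("shortened", "abducted", "externally rotated"),
    5: ("lengthened", "abducted", "externally rotated"),
}

# How often does each element appear across all options?
counts = Counter(e for opt in options.values() for e in opt)

# Score each option by the combined frequency of its elements
scores = {k: sum(counts[e] for e in opt) for k, opt in options.items()}
best = max(scores, key=scores.get)
print(best)  # 4: the keyed answer, reached without any clinical knowledge
```

Because the keyed answer is built from the most frequently repeated elements, element frequencies converge on it; balancing how often each element appears across the options removes the cue.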
55. Now review the items you have written using the checklist!
56. In Summary
Prior to writing MCQ items:
• Determine the purpose of the test
• Determine WHAT you want to test
• Use key-features to help you develop your
questions
Write questions avoiding technical flaws
57. References
• Guidelines for the Development of Multiple-Choice Questions at http://www.mcc.ca/pdf/MCQ_Guidelines_e.pdf
• Haladyna TM, Downing SM, Rodriguez MC. A Review of Multiple-Choice Item-Writing Guidelines for Classroom Assessment. Applied Measurement in Education 2002;15:309-334.