INSPIRE 2014 Updates (San Francisco, CA)


The International Network for Simulation-based Pediatric Innovation, Research, and Education is a rapidly growing, open research network designed to connect and mentor experts and novices across the world in answering important questions on pediatric care through the use of simulation.

Published in: Education, Technology, Business
Transcript

  • 1. INSPIRE @ IMSH Network Update 2013-2014. Marc Auerbach / Adam Cheng. January 25, 2014, San Francisco, California, USA. International Network for Simulation-based Pediatric Innovation, Research and Education.
  • 2. Schedule: 1730-1800 Network Updates; 1800-1820 Website Tour - Chang; 1820-1850 Research Design - Kessler; 1850-1920 Education Templates - Adler; 1920-1945 Future Directions - Rapid Report Outs; 1945-2045 Open Group Meeting - Auerbach/Chang; 2045-2100 Feedback/Discussion - Nadkarni, MacKinnon.
  • 3. Who are we?
  • 4. Growth: chart of participating sites by year, 2011-2014 (scale 0-180 sites).
  • 5. Growth: chart of network members by year, 2011-2014 (scale 0-600 members).
  • 6. Value
  • 7. Leadership
  • 8. Mission: We aim to improve the delivery of medical care to acutely ill children by answering important research questions pertaining to resuscitation, technical skills, behavioral skills, debriefing, and simulation-based education.
  • 9. What are we? Vision: answering important questions; pillars of research - building programs of simulation research; sharing resources - bringing down walls between institutions.
  • 10. Consensus on simulation research priorities: Merlin exercise (2012), consensus process (2013). Research Themes.
  • 11. Why Themes?
  • 12. Why Themes?
  • 13. INSPIRE Research Themes. Training and Assessment: Debriefing (develop/assess/implement effective techniques for debriefing real and simulated events); IPE, Teamwork, and Communication (effective techniques for team training); Procedural and Psychomotor Skills (effective techniques for skills development and retention). Health Care Innovations: Technology (novel technologies designed to improve processes of care and pediatric patient outcomes); Acute Care and Resuscitation (novel techniques for improving care of pediatric patients); Human Factors (assess the role of human factors when providing care to pediatric patients); Patient Safety (explore the key variables that influence patient safety and assess mitigation strategies).
  • 14. Current INSPIRE Projects (* new projects). Training and Assessment (Debriefing; IPE, Teamwork, Communication; Procedural, Psychomotor Skills): Cheng - Co-Debriefing in Simulation-based Education*; Halamek - DART Debriefing Assessment; Knight - Improving Code Team Performance and Survival Outcomes: Implementation of Pediatric Composite Resuscitation Training*; Hunt/Rosen - Team Leadership Under Stress; Overly - Structured patient encounter; Tensing Maa - PALS performance tool; Pusic - Learning Retention/Refreshers After DP of Radiograph Interpretation*; Dadiz - Exploring Facilitators/Barriers to Implementing Competency Assessments*; Arnold - Simulation to teach management of tracheostomy emergencies*; White M - Development of a Standardized Process for INSPIRE Procedure Kits*; Byrne - Comparison of ETI + UVC vs. LMA + IO Needle in NRP*; Mehta - The Effect of Simulation to Determine Frequency for Competency Skill Training*; Smith - Pediatric Simulation and the Milestones*; Sawyer - Neonatal Intubation; Chang - Train-the-trainer LP, Script Concordance LP; Brown - PRIDE Disaster Triage; Barry - BVM training; Kummett - Neonatal Skills. Health Care Innovations (Technology): Kessler - Randomized Trial of Continuous Capnography During Simulated Arrests*; Burhop - The Difficult Pediatric Airway: A Simulation Study Examining the Efficacy of Videolaryngoscopy in Trisomy 21*; Gee - Hybrid simulator. Health Care Innovations (Acute Care and Resuscitation; Human Factors; Patient Safety): Lemke - Rapid Cycle Deliberate Practice for Resuscitation Teams*; Meyer - Donation after Circulatory Death*; Auerbach - GED-PED Disparities; McKinnon - Critical Neurotrauma Sim; Mehta - Health literacy; Levy - PALS tool validation; Sens - Handoff Assessment; Fiedor-Hamilton - EpiPen; Sherzer - EpiPen community.
  • 15. What do we provide?
  • 16. Research Process: Young Investigator with Research Idea; Systematic Review or Needs Assessment; Pilot Study; Multicenter Study; Knowledge Translation; Publication. INSPIRE support along the way includes: online research series; senior INSPIRE mentor (via online mentor match) to help establish research goals and develop a one-page "specific aims" page; INSPIRE research coordinator to assist with systematic review methodology; INSPIRE librarian to assist with the literature search; review and revision of the study protocol with an INSPIRE mentor; review with the INSPIRE technology director to discuss possible tech-assisted outcome measures; review with the INSPIRE statistical consultant to solidify the analysis plan, feasibility, and power analysis; INSPIRE scientific committee review of the protocol and grant proposal; INSPIRE website to assist in finding collaborators and recruitment sites; INSPIRE research portal for data collection; data analysis and submission to the Manuscript Oversight Committee (MOC); INSPIRE research assistant and graphic designer to assist with poster preparation; INSPIRE writing group and scientific committee to assist with manuscript review and mitigation of authorship and byline issues; submission of the manuscript for peer review, revision with the mentor and writing group, and publication.
  • 17. Study Protocol Submission: online submission (http://www.INSPIRESim.com/) of the study protocol and any grant proposal; Research Design Committee feedback; Executive Oversight Committee feedback; invitation to present at IMSH or IPSSW (about 4 weeks).
  • 18. Study Protocol Submission (continued): Research Design Committee feedback; grant proposal with 0.1 FTE support; Executive Oversight Committee feedback; continued protocol revisions; Technology Committee feedback; invitation to present at IMSH or IPSSW; in-person presentation at IMSH or IPSSW (ongoing).
  • 19. Study Protocol Submission (continued): designated INSPIRE liaison; research portal access; grant proposal with 0.1 FTE support; logistical support; timeline for completion; expert access; two-way contract; in-person presentation; recruitment of other INSPIRE sites; biannual updates (< 6 weeks, then ongoing).
  • 20. Study Protocol Submission (continued): designated INSPIRE liaison; platforms; Manuscript Oversight Committee feedback; posters; authorship plan; biannual updates; manuscript submission; INSPIRE acknowledgment (1-4 months prior to final data collection; 1 year or less depending on timeline).
  • 21. Mentorship
  • 22. Value: support for research grant preparation; multi-center support; online research portal for data management.
  • 23. INSPIRE Research Collaborative: Manuscripts, Writing Groups and Authorship. Manuscript Oversight Committee (MOC). MOC Committee Members: Vinay Nadkarni (chair), Adam Cheng, Marc Auerbach, Betsy Hunt, David Kessler, Martin Pusic, Todd Chang, Jordan Duval-Arnould, Ralph McKinnon, Beth Mancini, Mary Patterson, Peter Weinstock, David Grant. MOC Guiding Principles: the MOC will ensure that INSPIRE research projects are peer-reviewed for publication in a manner that ensures timely and effective communication of research findings to our stakeholders and that INSPIRE members are properly credited for their hard work; additionally, the MOC will advocate for the involvement of young researchers in the publication process. 1. To be listed as an author, an individual must contribute significantly to the published work as described by the International Committee of Medical Journal Editors criteria (www.icmje.org); authors must meet all three of the following criteria: substantial contribution(s) to conception and design, acquisition of data, or analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; and final approval of the version to be published. 2. Authorship and the order of authorship (first, second, third, and last) will be assigned as early as possible in the research process; the first author is responsible for leading the writing process as described below and for delegating roles to co-authors. 3. Authorship and the order of authorship are subject to change if contributions to the final work product are not consistent with the expectations outlined by the lead author (e.g., development and organization of the protocol or tool, recruitment of many subjects); any research team member can contact the MOC for assistance with decisions related to authorship order and inclusion as an author.
  • 24. Writing Group Procedures. This document describes the writing process, including roles, expectations, and procedures for writing papers related to studies conducted through INSPIRE; the process was developed to facilitate the timely dissemination of research findings in the academic press, to reduce stress, and to increase communication among INSPIRE members. Key roles in the writing process: Primary Author - responsible for the main writing task and the corresponding author for the paper; Production Manager/Research Assistant - manages the entire writing process and is responsible for setting appropriate deadlines, maintaining progress, compiling sections written by others into a single draft, setting up a document template, and formatting the paper in accordance with the journal's style; Core Writing Group - a group of 3-5 people responsible for the content of the paper, including the main outcomes and messages reported there, who make decisions concerning the manuscript and, if conflict arises, must reach consensus. Steps in the writing process: 1. The Writing Group identifies the main outcome of the paper. 2. The Primary Author writes a 200-300 word abstract and shares it with the Writing Group. 3. The Production Manager works with the Primary Author to identify a timeline for the project and divide up writing tasks; if an author misses a deadline for the same product twice in a row, the Production Manager has the authority to reassign that work product and adjust that person's authorship status. 4. All manuscripts must receive final approval of the INSPIRE MOC prior to submission. 5. The Primary Author submits the manuscript for publication. 7. Once submitted, the Production Manager is responsible for coordinating all replies to peer reviewers, though the Primary Author is expected to take the major responsibility in preparing these replies; any secondary submission that requires re-analysis of data or re-interpretation of the primary findings of the paper should be done within 2 weeks of receipt of the comments. 8. Re-submissions are to be completed within 4 weeks of receipt of comments.
  • 25. Shared Expectations. INSPIRE will provide: ongoing review/mentorship; feedback on the study protocol; letters of support; access to collaborators/site investigators, research experts, the online portal, logistical support, manuscript oversight, templates, documents (scope of work, writing groups, IRB templates, data use agreements), publications, posters, and presentations. The investigator will provide: biannual reviews to INSPIRE; a budget line of 0.1 FTE or greater for administrative support in all grants; acknowledgement.
  • 26. Productivity: publications - 20; manuscripts in progress - 30; abstracts/presentations - 75; grants - 25; awards - 10.
  • 27. [Published article reproduced on the slide] Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing Assessment for Simulation in Healthcare (DASH): Development and Psychometric Properties. Simulation in Healthcare, 2012. The DASH scores showed evidence of good reliability (overall intraclass correlation coefficient 0.74; Cronbach alpha 0.89 across webinar raters) and preliminary evidence of validity.
  • 28. [Published article reproduced on the slide] Cheng A, Hunt EA, Donoghue A, et al.; for the EXPRESS Investigators. Examining Pediatric Resuscitation Education Using Simulation and Scripted Debriefing: A Multicenter Randomized Trial. JAMA Pediatrics, published online April 22, 2013. Use of a standardized debriefing script by novice instructors improved knowledge acquisition and team leader behavioral performance in simulated cardiopulmonary arrests; simulator physical realism had no independent effect.
  • 29. [Published article reproduced on the slide] Auerbach M, Chang TP, Reid J, et al. Are Pediatric Interns Prepared to Perform Infant Lumbar Punctures? A Multi-Institutional Descriptive Study. Pediatric Emergency Care, 2013;29(4). At the start of residency, most pediatric interns had little experience, poor knowledge, and low confidence and were not prepared to perform infant lumbar punctures.
  • 30. [Published article reproduced on the slide] Gerard JM, Kessler DO, Braun C, et al. Validation of Global Rating Scale and Checklist Instruments for the Infant Lumbar Puncture Procedure. Simulation in Healthcare, 2013. The anchored global rating scale outperformed the checklist in discriminant ability and interrater agreement.
  • 31. [Published article reproduced on the slide] Kessler DO, Arteaga G, Ching K, et al. Interns' Success With Clinical Procedures in Infants After Simulation Training. Pediatrics, 2013;131(3):e811-e820. Participation in a single simulation-based mastery learning session was insufficient to affect pediatric interns' subsequent clinical procedural success.
  • 32. [Published article reproduced on the slide] Kamdar G, Kessler DO, Tilt L, et al. Qualitative Evaluation of Just-in-Time Simulation-Based Learning: The Learners' Perspective. Simulation in Healthcare, 2013;8(1):43-48. Just-in-time training improved procedural confidence with infant lumbar puncture, but workplace busyness and instructor factors were barriers to its performance.
  • 33. [Published article reproduced on the slide] Haubner LY, Barry JS, Johnston LC, et al. Neonatal intubation performance: Room for improvement in tertiary neonatal intensive care units. Resuscitation, 2013 (in press). Overall tracheal intubation success across five NICUs was 44%; providers with advanced training were more likely to be successful, and patient factors were not associated with success.
• 34. Article shown: Cicero MX, Brown L, Overly F, Yarzebski J, Meckler G, Fuchs S, Tomassoni A, Aghababian R, Chung S, Garrett A, Fagbuyi D, Adelgais K, Goldman R, Parker J, Auerbach M, Riera A, Cone D, Baum CR. Creation and Delphi-method refinement of pediatric disaster triage simulations. Prehosp Emerg Care 2014 (Early Online:1-8; doi:10.3109/10903127.2013.856505). The authors built three ten-victim pediatric disaster simulations (a school shooting, a school bus crash, and a multiple-victim house fire) using standardized patients and high- and low-fidelity manikins, with checklist-based evaluation tools and behaviorally anchored global assessments for each scenario. Eight physician and paramedic subject matter experts (SMEs) from areas with differing pediatric disaster triage (PDT) strategies refined the simulations and tools through a modified Delphi process, assigning an expected triage category to each victim with a consensus target of at least 85%. After two Delphi rounds, consensus exceeded 85% for 28 of 30 victims, and the remaining two reached consensus in a third round; 11 instances of bias toward a specific PDT strategy and 10 instances of noncorrelation between evaluations and simulations were corrected. The simulations and evaluation tools may now be tested for reliability and validity as part of a prehospital PDT curriculum. Supported by an EMS for Children Targeted Issues Grant (HRSA #H34MC19349); the authors report no conflicts of interest.
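The consensus rule described in the abstract above is easy to operationalize. Below is a minimal, hypothetical Python sketch, not the authors' actual analysis: for each simulated victim it computes the share of SMEs who agree on the most frequently assigned triage category and flags any victim falling below the 85% target for another Delphi round. The victim identifiers and ratings are invented for illustration.

from collections import Counter

CONSENSUS_TARGET = 0.85  # >= 85% agreement, matching the modified Delphi target described above

# Hypothetical SME triage assignments: victim -> categories chosen by the eight SMEs
sme_ratings = {
    "victim_01": ["red", "red", "red", "red", "red", "red", "red", "yellow"],
    "victim_02": ["yellow", "yellow", "green", "yellow", "yellow", "red", "yellow", "yellow"],
}

def consensus(ratings):
    """Return the modal triage category and the fraction of SMEs who chose it."""
    category, count = Counter(ratings).most_common(1)[0]
    return category, count / len(ratings)

for victim, ratings in sme_ratings.items():
    category, agreement = consensus(ratings)
    status = "consensus reached" if agreement >= CONSENSUS_TARGET else "needs another Delphi round"
    print(f"{victim}: {category} ({agreement:.0%}) - {status}")

With eight SMEs, a single dissenter still yields 7/8 = 87.5% agreement, which clears the threshold; two dissenters (75%) would send that victim back for another Delphi iteration.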
• 35. Challenges
•  Total of 45 projects presented to date
   –  Not all go to multi-site phase
   –  Maintain a steady stream of productivity from single and multicenter studies
   –  Support promotion of young investigators in academics
•  Funding
   –  Identify and secure long-term infrastructure funding to support the future of the network
•  Governance
   –  Build capacity for transition of leadership
International Network for Simulation-based Pediatric Innovation, Research and Education
• 36. Structure - Creativity
International Network for Simulation-based Pediatric Innovation, Research and Education
• 37. Branding
International Network for Simulation-based Pediatric Innovation, Research and Education
• 38. Collaborations
•  IMSH
•  IPSSW
•  PAS SIG
•  APA EM SIG collaboration
•  Boot camps
•  APPD, ACGME, ABP, etc.
International Network for Simulation-based Pediatric Innovation, Research and Education
• 39. Thank you
INSPIRESimulationNetwork@gmail.com
http://www.INSPIRESim.com/
International Network for Simulation-based Pediatric Innovation, Research and Education
• 40. INSPIRE @ IMSH 2014: Website Tour. Todd Chang. January 25, 2014, San Francisco, California, USA
International Network for Simulation-based Pediatric Innovation, Research and Education
• 41. INSPIRE @ IMSH 2014: Simulation-Based Research Strategies for Success. Adam Cheng and David Kessler
International Network for Simulation-based Pediatric Innovation, Research and Education
• 42. What we know…
Article shown: Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011;306(9):978-988. From a pool of 10,903 articles, the review identified 609 eligible studies enrolling 35,226 trainees (137 randomized trials, 67 nonrandomized comparative studies, 405 single-group pretest-posttest designs). Compared with no intervention, pooled random-effects sizes were 1.20 (95% CI, 1.04-1.35) for knowledge, 1.14 for time skills, 1.09 for process skills, 1.18 for product skills, 0.79 for time behaviors, 0.81 for other behaviors, and 0.50 (95% CI, 0.34-0.66) for direct effects on patients; heterogeneity was large (I² > 50%) in all main analyses. Conclusion: technology-enhanced simulation training is consistently associated with large effects on knowledge, skills, and behaviors and moderate effects on patient-related outcomes.
•  Rapid growth: education and research
•  Simulation-based education is effective
•  Quality of studies is highly variable
International Network for Simulation-based Pediatric Innovation, Research and Education
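As background for the pooled numbers and the I² statistic quoted above, the generic random-effects formulas used in meta-analyses of this kind are stated here in standard textbook form; they are not reproduced from the paper itself.

\[
\hat{\theta}_{\mathrm{RE}} = \frac{\sum_{i=1}^{k} w_i^{*}\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i^{*}},
\qquad
w_i^{*} = \frac{1}{\hat{\sigma}_i^{2} + \hat{\tau}^{2}},
\qquad
I^{2} = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%
\]
% \hat{\theta}_i: effect size of study i; \hat{\sigma}_i^2: its within-study variance;
% \hat{\tau}^2: estimated between-study variance; Q: Cochran's heterogeneity statistic; k: number of studies.
% I² above 50%, as reported in the review, is conventionally read as substantial heterogeneity.

Under this weighting, each study's contribution shrinks as the estimated between-study variance grows, which is why large I² values widen the pooled confidence intervals.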
• 43. What we know…
•  22.5% RCTs
•  11.5% multicenter studies
•  5.3% reported patient and/or healthcare outcomes
•  Pediatrics? Same story…
International Network for Simulation-based Pediatric Innovation, Research and Education
• 44. Objectives
•  Describe the 2 different categories of simulation-based research
•  Describe the benefits of simulation-based research
•  Describe the various threats to the internal validity of simulation-based research studies, and identify associated mitigation strategies
International Network for Simulation-based Pediatric Innovation, Research and Education
• 45. Simulation Research
Subject of Research (e.g., a simulation curriculum)
Environment for Research (e.g., new technology)
International Network for Simulation-based Pediatric Innovation, Research and Education
• 46. Simulation as the Subject of Research
•  Research examining whether or not specific features of simulation experiences are educationally effective
International Network for Simulation-based Pediatric Innovation, Research and Education
• 47. Instructional Design Features
Article shown: Cook DA, Hamstra SJ, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hatala R. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach 2012 (early online; doi:10.3109/0142159X.2012.714886). From 10,903 articles, 289 eligible studies enrolling 18,971 trainees (208 randomized trials) were identified. For skills outcomes, pooled effect sizes (positive numbers favoring the design feature) were 0.68 for range of difficulty, 0.68 for repetitive practice, 0.66 for distributed practice, 0.65 for interactivity, 0.62 for multiple learning strategies, 0.52 for individualized learning, 0.45 for mastery learning, 0.44 for feedback, 0.34 for longer time, 0.20 for clinical variation, and -0.22 for group training; inconsistency was usually large (I² > 50%). The results quantitatively confirm the effectiveness of several instructional design features in simulation-based education.
•  Clinical Variation
•  Cognitive Interactivity
•  Curricular Integration
•  Distributed Practice
•  Feedback
•  Group Practice
•  Multiple Learning Strategies
•  Repetitive Practice
International Network for Simulation-based Pediatric Innovation, Research and Education
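The pooled values above are standardized mean differences. As a reminder of the scale they sit on, a generic Hedges'-g-style standardized effect size (standard formula, not taken from the paper) compares two group means against their pooled standard deviation:

\[
g = J \cdot \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{p}},
\qquad
s_{p} = \sqrt{\frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}},
\qquad
J = 1 - \frac{3}{4(n_{1}+n_{2}-2)-1}
\]
% \bar{x}_1, \bar{x}_2: group means; s_1, s_2: group standard deviations; n_1, n_2: group sizes;
% s_p: pooled standard deviation; J: small-sample bias correction.
% By convention, values near 0.2, 0.5, and 0.8 are read as small, moderate, and large effects.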
• 48. Instructional Design
How do simulation-based educational interventions need to be modified for the pediatric context?
International Network for Simulation-based Pediatric Innovation, Research and Education
• 49. Scripted Debriefing for PALS
Article shown: Cheng A, Hunt EA, Donoghue A, et al., for the EXPRESS Investigators. Examining pediatric resuscitation education using simulation and scripted debriefing: a multicenter randomized trial. JAMA Pediatr 2013 (published online April 22; doi:10.1001/jamapediatrics.2013.1389). Prospective, randomized factorial study conducted from 2008 to 2011 at 14 EXPRESS network simulation programs, in which interprofessional health care teams were randomized to scripted vs. nonscripted debriefing and high- vs. low-realism simulators. Scripted debriefing by novice instructors produced greater improvement in knowledge (MCQ) and team leader behavioral performance (BAT); improvement in clinical performance (CPT) did not differ significantly, and simulator realism had no independent effect.
•  Use of a debriefing script (after simulated resuscitation) improved knowledge and leadership skills
International Network for Simulation-based Pediatric Innovation, Research and Education
• 50. Distributed Practice
Article shown: Andreatta P, Saxton E, Thompson M, Annich G. Simulation-based mock codes significantly correlate with improved pediatric patient cardiopulmonary arrest survival rates. Pediatr Crit Care Med 2011;12(1). Over 48 months, clinicians responded to simulation-based mock codes called at random and at increasing rates, followed by immediate faculty-facilitated debriefing. Pediatric cardiopulmonary arrest survival rates rose to approximately 50%, correlating with the increasing number of mock codes (r = .87), significantly above average national pediatric survival rates, and held steady for three consecutive years.
•  Randomly called mock codes over 48 months – increased survival rates (significantly above national average) for cardiac arrest
International Network for Simulation-based Pediatric Innovation, Research and Education
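For reference, the r = .87 reported above is a Pearson correlation coefficient. In this context it would be computed over paired observations such as the number of mock codes and the survival rate in each period; that pairing is an illustrative assumption here, not a detail taken from the paper.

\[
r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}\ \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}
\]
% x_i: number of mock codes in period i (illustrative); y_i: observed survival rate in period i;
% \bar{x}, \bar{y}: their means over the n periods. Values near 1 indicate a strong positive linear association.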
• 51. Deliberate Practice
A single SBME mastery learning session using an infant lumbar puncture task trainer was insufficient to affect pediatric interns' procedural success
International Network for Simulation-based Pediatric Innovation, Research and Education
• 52. Simulation as the Environment for Research
The simulated environment is used as an experimental model to study factors affecting human and systems performance in healthcare.
International Network for Simulation-based Pediatric Innovation, Research and Education
• 53. Performance Shaping Factors
Article shown: LeBlanc VR, Manser T, Weinger MB, Musson D, Kutzin J, Howard SK. The study of factors affecting human and systems performance in healthcare using simulation. Simul Healthc 2011;6(Suppl):S24-S29. This research summit monograph illustrates how simulation can serve not only as the object of research but also as a research environment for studying performance shaping factors (such as fatigue, stress, team composition, equipment characteristics, environmental features, and system-level characteristics) that would be impractical or unethical to manipulate in actual clinical settings, and discusses methodological considerations for doing so.
•  Individuals
•  Teams
•  Environments
•  Technological Factors
•  Systems Factors
•  Patient Factors
International Network for Simulation-based Pediatric Innovation, Research and Education
• 54. New Environments
Article shown: Geis GL, Pio B, Pendergrass TL, Moyer MR, Patterson MD. Simulation to assess the safety of new healthcare teams and new facilities. Simul Healthc 2011;6(3):125-133. Twenty-four laboratory and in situ simulations performed over the 3 months before a satellite pediatric emergency department opened were used to define staff roles and scope of practice, measure perceived workload (NASA Task Load Index), score team behaviors (Mayo High Performance Team Scale), and identify 37 latent safety threats (LSTs) involving equipment, personnel, and resources.
•  Simulation was used to help determine provider workload, refine team responsibilities, and identify LSTs before a new hospital facility was opened with real patients
International Network for Simulation-based Pediatric Innovation, Research and Education
• 55. Intubation – Difficult Airway
Article shown: Fonte M, Oulego-Erroz I, Nadkarni L, Sanchez-Santos L, Iglesias-Vasquez A, Rodriguez-Nunez A. A randomized comparison of the GlideScope videolaryngoscope to the standard laryngoscopy for intubation by pediatric residents in simulated easy and difficult infant airway scenarios. Pediatr Emerg Care 2011;27(5):398-402. Sixteen pediatric residents intubated an advanced infant simulator in four scenarios (normal airway, tongue edema, tongue plus oropharyngeal edema, cervical collar) using both a standard Miller laryngoscope and the GlideScope. The GlideScope did not improve intubation performance (more failed attempts and longer time to intubation in some scenarios, and residents rated it more difficult), although it was associated with a lower upper jaw injury index.
•  For pediatric residents managing a difficult airway, use of a GlideScope did not improve airway management compared to a traditional laryngoscope
International Network for Simulation-based Pediatric Innovation, Research and Education
• 56. Simulation Research: Pros and Cons
International Network for Simulation-based Pediatric Innovation, Research and Education
• 57. Advantages
•  Standardization of clinical context
•  On demand clinical presentations
•  Recruitment of subjects
•  No risk for patient harm
•  No concerns re: protected health information
•  Easy to collaborate
International Network for Simulation-based Pediatric Innovation, Research and Education
• 58. Disadvantages
•  Authenticity
•  Cost
•  Outcomes
•  Lack of best practices
•  Funding difficult
International Network for Simulation-based Pediatric Innovation, Research and Education
• 59. Verdict?
International Network for Simulation-based Pediatric Innovation, Research and Education
• 60. Threats to Internal Validity
International Network for Simulation-based Pediatric Innovation, Research and Education
• 61. What is Internal Validity?
"Internal validity is a property of scientific studies which reflects the extent to which a causal conclusion based on a study is warranted. Such warrant is constituted by the extent to which a study minimized systematic error or bias."
International Network for Simulation-based Pediatric Innovation, Research and Education
• 62. Mitigation Strategies
International Network for Simulation-based Pediatric Innovation, Research and Education
• 63. Simulator Selection
•  Ensure simulator has desired functionality
•  Use the same simulator for all sessions
International Network for Simulation-based Pediatric Innovation, Research and Education
• 64. Scenario Design
•  Pre-set or limit the duration of the scenario
•  Set transitions
•  Standardize verbal, audio and/or visual cues
•  Pilot run scenarios / train research facilitators
International Network for Simulation-based Pediatric Innovation, Research and Education
• 65. Confederates
•  Careful scripting of confederate roles
•  Confederate cue cards
•  Train confederates (e.g., training video, pilot run)
International Network for Simulation-based Pediatric Innovation, Research and Education
• 66. Realism
•  Standardize the environment – equipment, location, human resources, size of room, noise level, etc.
•  Orient participants to the simulator and simulated environment
International Network for Simulation-based Pediatric Innovation, Research and Education
• 67. Debriefing / Feedback
•  Who – debriefer characteristics
•  What – method of debriefing / content of debriefing
•  When – timing of debriefing
•  Where – environment for debriefing
•  Why – debriefing theory
International Network for Simulation-based Pediatric Innovation, Research and Education
• 68. Video Capture / Review
•  Ideal video angle
•  Number of views required to capture behaviors of interest
•  Monitor displaying vital signs
•  Ensure adequate audio capture
•  Pilot run
International Network for Simulation-based Pediatric Innovation, Research and Education
• 69. Outcome Measures
International Network for Simulation-based Pediatric Innovation, Research and Education
• 70. Questions?
International Network for Simulation-based Pediatric Innovation, Research and Education
• 71. INSPIRE @ IMSH 2014: Instructional Design. Mark Adler. January 25, 2014, San Francisco, California, USA
International Network for Simulation-based Pediatric Innovation, Research and Education
• 72. INSPIRE @ IMSH 2014: Project Wrap-ups. January 25, 2014, San Francisco, California, USA
International Network for Simulation-based Pediatric Innovation, Research and Education
• 73. In the next year we will…
•  ALERT New Projects I
   –  Cheng, Mallory, Flannery, Moro-Sutherland, Mullan
•  ALERT New Projects II
   –  Weiner, Ambati, Pennaforte, Ruscica, Burkhard
•  ALERT 2013 Project Updates
   –  Johnston, Chang, Dadiz, Auerbach, MacKinnon, Cheng, Cheng, Pusic, Lemke, Kessler, Hundalani, White
International Network for Simulation-based Pediatric Innovation, Research and Education
• 74. In the next year we will…
•  What did you achieve today?
•  Gaps/questions
•  Timeline/Next steps
   –  2 weeks
   –  2 months
   –  6 months
   –  1 year
International Network for Simulation-based Pediatric Innovation, Research and Education
• 75. INSPIRE @ IMSH 2014: Open Group Meeting. Nadkarni/MacKinnon. January 25, 2014, San Francisco, California, USA
International Network for Simulation-based Pediatric Innovation, Research and Education
• 76.
•  Feedback on day
•  Network Goals
•  Next steps
   –  IPSSW Vienna
   –  PAS SIG Vancouver
   –  IMSH 2015
International Network for Simulation-based Pediatric Innovation, Research and Education
