The document summarizes an assessment given to nurses who attended a patient safety training. It details the assessment framework, participants, data collection tool, results, and measurement characteristics. The assessment was criterion-referenced and measured declarative knowledge through a 12-item true/false post-test. Cronbach's alpha was 0.86, indicating good internal consistency. The mean score was 91.2% with a standard error of measurement of 6.4%. Content validity was supported by aligning the items with the concepts taught in the training.
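The reported figures hang together via the standard formula SEM = SD * sqrt(1 - reliability). A minimal sketch, assuming a hypothetical score SD of about 17.1 percentage points (not stated in the document) that reproduces the reported SEM:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical: with alpha = 0.86, a score SD of ~17.1 percentage
# points reproduces the reported SEM of about 6.4%.
error = sem(17.1, 0.86)

# An approximate 95% band around the reported mean score of 91.2%.
band = (91.2 - 1.96 * error, 91.2 + 1.96 * error)
```

The SEM is what turns a reliability coefficient into an interpretable error band on an individual score.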
Innovative Strategies For Successful Trial Design - Webinar Slides (nQuery)
Full webinar available here: https://www.statsols.com/webinar/innovative-strategies-for-successful-trial-design
[Webinar] Innovative Strategies For Successful Trial Design- In this free webinar, you will learn about:
- The challenges facing your trials
- How to calculate the correct sample size
- Worked examples including Mixed/Hierarchical Models
- Posterior Error
- Adaptive Designs For Survival
www.statsols.com
This document discusses quality assurance in physician office laboratories, providing guidance on developing a quality assurance program that meets CLIA requirements.
The document outlines key areas to assess in a quality assurance program, including the relationship between patient information and test results, personnel competency, communication processes, complaint handling, staff training, and record keeping. It provides examples of how to evaluate each area and ensure corrective actions are taken when issues arise. The goal of a quality assurance program is to continuously monitor and improve all aspects of the total testing process to provide quality patient care.
This document discusses a quality improvement project aimed at reducing emergency room wait times. A team of 3 nurses will lead the project. They plan to research current best practices for minimizing wait times and improving the patient experience in the ER. Options may include adjustments to staffing, facility layout, or patient flow. The team will evaluate several proposals before testing a new approach. Their goals are to enhance patient satisfaction, safety, and hospital reimbursement by addressing long wait times in the ER.
Sample Size: A couple more hints to handle it right using SAS and R (Dave Vanz)
Andrii Artemchuk of Intego Group, a Ukrainian offshore staffing company, presented this PowerPoint on SAS and R at a PhUSE conference in Frankfurt, Germany, in 2018.
Evaluating Change and Tracking Improvement (Jane Chiang)
This document summarizes the evaluation of innovation units at a hospital. It describes the evaluation process, data collected, and key findings. An evaluation steering committee oversees the evaluation in 90-day cycles. Data is collected through surveys, interviews, and observations. Findings show positive feedback from patients and staff regarding relationship-based care practices. Opportunities are identified in areas like documenting discharge dates and care team members. Next steps include continuing the evaluation, expanding to more units, and deepening analysis of specific measures to further optimize the innovation units.
The document discusses quantitative synthesis and meta-analysis methods. It defines key terms like effect measures, heterogeneity, and fixed and random effects models. It also covers combining data across studies, including calculating weighted averages and addressing issues like Simpson's paradox and heterogeneity that can impact meta-analyses. Worked examples are provided for binary outcomes, risk ratios, and calculating treatment effects from studies.
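The weighted-average step mentioned above can be illustrated with fixed-effect (inverse-variance) pooling of log risk ratios; the study estimates and variances below are hypothetical:

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios.
# Each tuple is a hypothetical study: (log risk ratio, variance of log RR).
studies = [(math.log(0.80), 0.04),
           (math.log(0.70), 0.09),
           (math.log(0.95), 0.02)]

# Each study is weighted by the inverse of its variance, so more
# precise studies contribute more to the pooled estimate.
weights = [1.0 / v for _, v in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_var = 1.0 / sum(weights)
pooled_rr = math.exp(pooled_log_rr)
```

The pooled variance is smaller than any single study's variance, which is the point of combining studies; a random-effects model would additionally add a between-study variance term to each weight.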
Use of the Crowdsourcing Methodology to Generate a Problem-Laboratory Test Kn... (Allison McCoy)
We evaluated a previously described crowdsourcing methodology for generating a problem-laboratory test knowledge base, in which clinicians identify appropriately linked problem-laboratory test pairs during e-ordering. Existing evaluation metrics, including patient frequency and link ratio, were not correlated with appropriateness for the 600 manually validated links. Further research is needed to better evaluate these associations.
Enhancing Code Blue Performance with xAPI (Watershed)
Providing care to more than 500,000 patients each year, MedStar Health is the largest healthcare provider in the Washington, D.C./Maryland region. As an organization, they’re committed to providing the best care during emergency situations in which patients are in cardiopulmonary arrest (referred to as “Code Blues”).
During a Code Blue, the stakes are literally life and death, which is why it is vital that MedStar resuscitation team members are well trained. In the seconds and minutes that follow a Code Blue, speed is critical: the time to begin chest compressions, deliver defibrillation, and administer medications to the patient all matter.
As a result, MedStar’s Code Blue training and learning resources have focused on improving clinician performance to reduce these times. However, MedStar didn’t have extensive information on the effectiveness of various training programs and learning resources.
Using Watershed and xAPI to aggregate and visualize data from multiple data sources, MedStar is now able to answer a range of questions about the usage and effectiveness of their training systems. They also have a better understanding of where they need to target their attention to improve performance. In particular, they can test the “chain of cause and effect” from training to simulations to final results.
This document presents a literature review and proposed solution to implement mock code blue training programs at a rural hospital. The purpose is to determine if regular mock code training improves provider performance during actual code blue events, compared to no additional training between certification periods. The literature review found that simulation and mock code training generally increases participant skills, confidence, and team performance during codes. Several studies showed mock training led to better initial test scores and skill retention over 1 year, compared to traditional training alone. The proposed solution is to implement a structured mock code blue training program at the hospital to reinforce skills and maintain competency between certification periods. The program aims to improve patient outcomes and provider satisfaction.
This document discusses key aspects of research design including replication, randomization, and blocking. It explains that replication involves repeating experiments to test if units respond the same way to treatments, allowing significance to be tested. Randomization ensures unbiased estimates by randomly assigning experimental units to treatments. Blocking balances groups and controls for extraneous variables through homogeneous blocking. The document provides an example experimental design layout.
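The blocking and randomization ideas described, each treatment appearing once per block with treatment order randomized within each homogeneous block, can be sketched as a randomized complete block layout (treatment labels and block count are illustrative):

```python
import random

# Randomized complete block design: every block receives each
# treatment exactly once, with order randomized within the block.
treatments = ["A", "B", "C", "D"]
blocks = 3

random.seed(42)  # fixed seed for a reproducible layout
layout = {}
for b in range(1, blocks + 1):
    order = treatments[:]
    random.shuffle(order)  # randomization within each homogeneous block
    layout[f"block {b}"] = order
```

Within-block randomization removes assignment bias, while blocking itself controls for the extraneous variable the blocks were built around.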
Highlights from ExL Pharma's 5th Data Monitoring Committees (ExL Pharma)
The document discusses highlights from a conference on Data Monitoring Committees/DSMBs in adaptive clinical trials. It provides examples of how DMCs/DSMBs can make recommendations to change aspects of a trial based on interim analyses of safety or efficacy data, such as increasing the sample size, dropping non-efficacious doses, or changing the primary endpoint. It emphasizes that such recommendations could potentially introduce bias if not properly considered and addressed. The role and composition of DMCs/DSMBs are also outlined.
The document describes how a new intelligence solution called the Readmission Prevention Solution was developed and implemented at Advocate Health Care using DMAIC (Define, Measure, Analyze, Improve, Control) principles. It details how the previous manual process for assessing readmission risk was time-consuming and inefficient. A new algorithm called the All-Cause Readmission Risk Algorithm was developed to automatically predict individual patient readmission risk based on data from their electronic health records. The new solution improved the workflow for care managers and provided additional features to help proactively mitigate readmission risk.
This document discusses issues related to treatment switching in clinical trials and potential methods for adjusting overall survival results when treatment switching occurs. It defines treatment switching as when a patient randomized to one treatment arm changes to the alternative treatment during a study. Several challenges and potential solutions are outlined, including determining whether adjustment is necessary, which adjustment method is appropriate, and assumptions of different adjustment methods. The document emphasizes transparently reporting assumptions, considering multiple adjustment methods, and using simulation to assess the impact of assumptions.
Comparison of Clinical Diagnoses versus Computerized Test Diagnoses Using the ... (Nelson Hendler)
The Diagnostic Paradigm from www.MarylandClinicalDiagnostics.com helped the former Dean of Los Angeles Chiropractic College detect medical diagnoses he had overlooked and later confirmed.
This document discusses the use of mock emergency drills or "mock codes" at Children's Hospital of The King's Daughters (CHKD) to prepare the code response team for real emergencies. The drills use high-fidelity patient simulators and aim to improve both clinical skills and teamwork. After each drill, the team debriefs to discuss what went well and how future performance can be improved. The program has been successful and its team-based approach could be applied to other hospital settings.
1) A five-year study led by Professor Amar Rangan compared surgical versus non-surgical treatment of broken shoulders and found no significant difference in outcomes between the two approaches.
2) The study was the largest clinical trial ever conducted on shoulder fractures and involved over 250 patients across 32 UK hospitals.
3) Finding no difference between surgical and non-surgical treatment could significantly reduce costs for the NHS as surgery is increasingly being used but may not be necessary for most shoulder fractures.
This document discusses key considerations for usability testing of medical devices. It notes that while usability activities are not clinical trials, they will still require extensive documentation and safety protocols. Researchers must understand relevant regulations and work closely with clients, medical experts, and Institutional Review Boards. Additional precautions are needed when testing with vulnerable participants or children. Moderator guides and data collection methods must be rigorously defined to meet regulatory requirements.
This document discusses quality control, quality assurance, and quality assessment in medical laboratories. It defines key terms like quality control, quality assurance, and quality assessment. Quality control refers to analytical measurements used to assess data quality, while quality assurance is an overall management plan to ensure data integrity. Quality assessment determines the quality of results generated by evaluating internal and external quality programs. The document outlines quality assurance and quality control processes like standard operating procedures, equipment and reagent validation, personnel competency, and documentation. It also discusses error types, control chart interpretation, and Westgard rules for evaluating quality control results.
Quality Control for Quantitative Tests by Prof Aamir Ijaz (Pakistan) (Aamir Ijaz Brig)
This document provides an overview of quality control and quality assurance processes in a chemical pathology laboratory. It discusses key terms like quality control, quality assurance, internal quality control, and external quality assurance. It also describes different types of errors, such as random error and systematic error, and explains statistical concepts like measures of central tendency, standard deviation, and coefficient of variation. It discusses the Westgard rules for evaluating quality control results and triggering investigations into potential errors. The goal of the lecture is to describe the processes involved in quality management for chemical pathology laboratories.
7 Components to Medical Device Usability Testing Success (Margee Moore)
Despite the publication of various relevant guidance on medical device usability testing and standards for human factors testing, confusion regarding best practices still exists. This presentation provides clear language and seven components for planning successful usability testing for medical device development.
The Ryde Hospital Fracture Clinic sees 80 patients each Friday afternoon. It requires significant hospital resources, including 15 staff from multiple disciplines for around 5 hours per week. An audit found average patient waiting times were 81 minutes in 2015. Only 47% of patients surveyed were very satisfied with the service due to long waits, a poor environment, and lack of information. The #FixIt project aims to improve the clinic efficiency and patient experience by 25% within 12 months by enhancing the environment, reducing waits, and improving patient flow and information.
This document discusses item analysis, which is used to select and reject test items based on difficulty and discrimination. It defines item difficulty as the percentage of examinees answering correctly. Discrimination refers to an item's ability to distinguish between high- and low-scoring examinees. Items are evaluated based on difficulty index, discrimination index, and whether they appropriately target the range of student abilities. Formulas are provided to calculate these indexes using data on correct responses from top- and bottom-scoring student groups.
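The difficulty and discrimination formulas described can be written out directly; the response counts below are hypothetical:

```python
def item_indices(upper_correct: int, lower_correct: int, group_size: int):
    """Difficulty and discrimination from top- and bottom-group counts.

    upper_correct / lower_correct: correct responses in the high- and
    low-scoring groups; group_size: number of examinees in each group.
    """
    # Difficulty index: proportion of all examinees (both groups) correct.
    difficulty = (upper_correct + lower_correct) / (2 * group_size)
    # Discrimination index: gap between the groups, scaled to one group.
    discrimination = (upper_correct - lower_correct) / group_size
    return difficulty, discrimination

# Hypothetical item: 18 of 20 top scorers and 8 of 20 bottom scorers correct.
p, d = item_indices(18, 8, 20)  # p = 0.65, d = 0.50
```

An item with a moderate difficulty index and a positive discrimination index like this one would typically be retained; a near-zero or negative discrimination flags an item for revision.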
This document discusses the characteristics of a good test. It defines test reliability as the consistency with which a test measures what it is intended to measure. Sources of measurement error that can impact reliability include objectivity of scoring, sampling of content, and temporal influences. Methods for estimating reliability include test-retest, alternate forms, inter-rater reliability, split-half, and Kuder-Richardson approaches. Factors like test length, range of talent, time limits, and difficulty of test items can affect reliability. Practicality and generalizability are also important characteristics of a good test.
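One of the reliability estimation methods listed, split-half reliability with the Spearman-Brown correction, can be sketched as follows (the odd/even half-test scores are invented for illustration):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical total scores on odd- and even-numbered items
# for five examinees (one split-half partition of the test).
odd = [5, 4, 3, 5, 2]
even = [4, 4, 2, 5, 1]

r_half = pearson(odd, even)
# Spearman-Brown steps the half-test correlation up to full-test length,
# reflecting the point above that longer tests tend to be more reliable.
reliability = 2 * r_half / (1 + r_half)
```

The corrected coefficient is always at least as large as the half-test correlation, which is one concrete way test length affects reliability.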
Item analysis is a process to evaluate the quality and performance of individual test items. It involves analyzing students' responses to separate test items and their total test scores to evaluate item discrimination (how well items differentiate between more and less competent students) and difficulty. Item analysis can be done electronically by correlating student responses on individual items with their total scores, or manually by comparing responses of high-scoring and low-scoring student groups. The results are used to identify poorly performing items that may need revision or removal from tests.
This lecture recaps the previous lecture on exploratory factor analysis and introduces psychometrics and the measurement of fuzzy concepts, including operationalisation, reliability (particularly internal consistency of multi-item measures), validity, and the creation of composite scores. See also https://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology/Lectures/Psychometric_instrument_development
A good test should have the following key characteristics:
1. It should be a valid instrument that accurately measures what it is intended to measure as evidenced by various types of validity like content validity.
2. It should be a reliable instrument that consistently measures constructs and yields similar results over time as determined through methods like test-retest reliability.
3. It should be objective by eliminating personal bias and opinions of scorers so that different scorers arrive at the same score.
This document discusses item analysis, which examines student responses to test questions. There are two types: quantitative, which uses statistics like difficulty and discrimination indices, and qualitative, which involves expert review. Difficulty index measures the proportion of students answering correctly, ranging from very difficult to very easy. Discrimination index measures an item's ability to distinguish high-scoring from low-scoring students. Qualitative analysis involves experts proofreading tests for issues like ambiguity before administration.
This document discusses analyzing test items to determine their difficulty and ability to differentiate between high and low scoring examinees. It provides guidelines for interpreting facility and discrimination indices. The facility index represents the percentage of examinees answering an item correctly, and discrimination is a number indicating how well an item distinguishes high and low performers. Examples are given of calculating these indices and analyzing item performance based on the results.
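As a worked illustration of the facility (difficulty) and discrimination indices described above, here is a minimal Python sketch. The function names and the conventional 27% upper/lower group split are assumptions for illustration, not taken from the slides.

```python
# Illustrative sketch: facility index as the proportion answering correctly,
# and discrimination index from upper- vs. lower-scoring groups.

def facility_index(item_scores):
    """Proportion of examinees answering the item correctly (0..1)."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(responses, item, group_frac=0.27):
    """D = (correct in upper group - correct in lower group) / group size.

    responses: one list of 0/1 item scores per examinee, any order.
    group_frac: the conventional 27% split (an assumption here).
    """
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(len(ranked) * group_frac))
    upper = sum(s[item] for s in ranked[:k])
    lower = sum(s[item] for s in ranked[-k:])
    return (upper - lower) / k
```

Under commonly cited guidelines, facility values near the middle of the range and discrimination values of roughly 0.3 or higher are treated as acceptable; items near zero or negative discrimination warrant review.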
This presentation discusses the concepts of validity and reliability as they relate to examination scores. Validity refers to whether a test measures what it intends to measure, and there are three types: content validity, construct validity, and criterion validity. Reliability refers to the consistency and stability of test scores and is concerned with minimizing errors. A test can be reliable but not valid, as reliability is a necessary but not sufficient condition for validity. Both concepts are important for determining the quality and usefulness of examination scores.
Educational psychology - Test and measurement (Jocelyn Camero)
1. The document discusses the historical development of educational testing and measurement from ancient China to the modern era. It outlines key figures and their contributions, including the development of intelligence tests, aptitude tests, and personality tests in the late 19th and early 20th centuries.
2. Measurement aims to determine a student's abilities, knowledge, and achievement, while evaluation assesses the quality or worth of their learning. Standardized tests are more rigorously developed and validated than teacher-made tests.
3. Measurement and evaluation serve instructional, administrative, and research purposes such as student placement, curriculum development, and determining teacher and program effectiveness. They are important tools but also present challenges in assessing complex and changing human attributes.
The document discusses item analysis, which is the analysis of multiple choice questions on a test. It explains the need for item analysis and its advantages. Some key tools in item analysis are difficulty index, discrimination index, and distracter effectiveness. The document outlines the procedure for conducting item analysis, which involves ranking test papers, calculating difficulty and discrimination indexes using formulas, and evaluating questions based on the indexes.
The document discusses item analysis, which is the process of examining test responses to evaluate the quality of individual test items and the test itself. It aims to improve the effectiveness of items used on future tests. Key aspects covered include item difficulty index, item discrimination, and analyzing items based on how well they measure the effects of instruction. The document provides examples and interpretations for calculating various metrics used in item analysis.
This short SlideShare presentation explores a basic overview of test reliability and test validity. Validity is the degree to which a test measures what it is supposed to measure. Reliability is the degree to which a test consistently measures whatever it measures. Examples are given as well as a slide on considerations for writing test questions that demand higher-order thinking.
This document discusses item analysis, which is a procedure used to evaluate test questions and assess whether they are effectively measuring the intended construct. It defines key terms like item difficulty, facility value, discrimination index, and discusses the purposes and steps of performing an item analysis. The purposes include selecting the best questions, identifying weaknesses, and improving the quality and effectiveness of assessments. The steps involve scoring tests, dividing students into high and low groups, calculating difficulty and discrimination indices for each item, and using the results to revise tests.
Psychological test meaning, concept, need & importance (jd singh)
This document discusses psychological testing. It defines psychological testing as a standardized measure of a person's behavior that is used to observe differences among individuals. It notes that tests measure constructs like abilities, functioning, and personality. The document outlines the objectives, need, importance and types of psychological tests. It describes the major characteristics of tests including standardization, norms, reliability and validity. Finally, it provides examples of commonly used psychological tests.
A good test should be valid and reliable. Validity refers to how well a test measures what it intends to measure. There are three main types of validity: content validity, criterion-related validity, and construct validity. Reliability refers to the consistency of test scores. Sources of measurement error can affect reliability. Reliability is estimated through methods like test-retest, parallel forms, and internal consistency. Item analysis evaluates item difficulty and discrimination to identify questions that need improvement.
The document discusses the process of item analysis and validation for ensuring a useful and functional test. It describes analyzing test items for difficulty index and discrimination index based on the performance of upper and lower scoring students. Items are categorized by difficulty and discrimination for revision or removal. Validation involves checking the test's content validity with experts, criterion-related validity by comparing to other tests, and reliability using methods like split-half reliability. The goal is to analyze how well items measure the intended objectives and how consistently the test scores perform.
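The split-half reliability method mentioned above splits the test into two halves (commonly odd- vs. even-numbered items), correlates the half scores, and steps the correlation up to full-test length with the Spearman-Brown formula. A minimal sketch, with illustrative names:

```python
# Sketch of split-half reliability with the Spearman-Brown correction.
# The odd/even split and function names are illustrative assumptions.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(responses):
    """responses: one list of item scores per examinee."""
    odd = [sum(s[::2]) for s in responses]    # items 1, 3, 5, ...
    even = [sum(s[1::2]) for s in responses]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)          # Spearman-Brown step-up
```

The Spearman-Brown step is needed because the raw half-test correlation underestimates the reliability of the full-length test.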
Test validity refers to validating the appropriate use of a test score for a specific context or purpose. Validity is determined by studying test results in the intended setting of use, as a test may be suitable for one purpose but not another. Validity is a matter of degree rather than an absolute quality, and establishing validity requires empirical evidence and theoretical justification that the intended inferences from test scores are adequate and appropriate.
Validity & reliability - an interesting PowerPoint slide I created (Sze Kai)
This document discusses the importance of validity and reliability in testing. It provides an example of a weighing test to question validity, noting that environment, respondent, and test administration can affect reliability. Both validity and reliability are important for a good test, which requires balancing these factors. Teachers should understand both concepts when creating assessments.
This document discusses perspectives on multiple choice question (MCQ) assessment. It provides general thoughts on how MCQs can be used formatively or summatively. It then describes processes for designing, establishing validity and reliability, and providing feedback for MCQs. Specific examples are given from the NCLEX-RN test plan to illustrate steps in writing MCQs, including selecting areas of focus, writing stems and keys/distractors. Case scenarios are also used to demonstrate how to write MCQs assessing different nursing concepts.
Nursing Research and Evidence Based Practice DQ.pdf (bkbk37)
This document describes a study that aimed to increase nurse compliance with bedside shift report and improve patient satisfaction scores. A standardized approach to bedside report was implemented using Lewin's change theory. Patient satisfaction scores and nurse compliance with bedside report were compared before and after implementation. Results showed improved nurse compliance with bedside report and higher patient satisfaction scores post-implementation, indicating the standardized approach helped increase nurse compliance and improved the patient experience.
This document discusses reliability in healthcare and strategies to improve it. It defines reliability as the consistency of achieving intended outcomes. Processes can be measured on a scale of reliability from 10^-1 to 10^-6. Strategies to improve reliability include preventing failures through standardization, identifying and mitigating failures through redundancy and decision aids, and redesigning systems based on understanding failure modes. Bundles, or groups of interventions that improve outcomes when implemented together, can also enhance reliability. The example of a diabetes care bundle includes various tests and education.
This document provides resources and instructions for conducting a root cause analysis of a medical error or safety issue related to medication administration. Students are asked to choose a safety concern from a previous assessment or personal experience and analyze the root cause. They then develop a safety improvement plan using best practices and existing organizational resources. The goal is to demonstrate understanding of root cause analysis and developing plans to improve patient safety regarding medication administration.
Getting Right with The Joint Commission's Communication Goal (Spok)
In pursuit of its mission, The Joint Commission audits and accredits more than 21,000 healthcare organizations and programs for clinical excellence and patient safety. The organization also publishes an annual list of National Patient Safety Goals (NPSG) highlighting specific areas of focus for improvement within the healthcare environment. Improving communications is included in the list as a high priority because communication delays and errors can have serious consequences, for patients as well as hospitals. This webinar explores some of the communication challenges hospitals face and how technology can help them comply with The Joint Commission's communication goal (NPSG 2).
Susan Burnett: Measuring and monitoring safety in health care (QualityWatch)
The document discusses key issues for patient safety over the next decade. It notes that measuring safety in healthcare has been challenging due to fragmented safety information across organizations. While some metrics like mortality rates can provide insights, they only offer a partial view of overall patient harm. The document calls for improved integration and customization of safety data so it can be better understood and used for proactive improvement at different organizational levels. Developing ways to anticipate safety issues before they occur and treating safety as an organizational rather than just clinical concern are also emphasized.
This document provides an overview of a quantitative research study that evaluated the impact of implementing a standardized bedside shift report process. The study aimed to increase nurse compliance with bedside reporting and improve patient satisfaction scores. A change management strategy was used to hardwire bedside reporting, which addressed barriers and provided staff education. After implementing these changes, the study found improved nurse compliance with bedside reporting and higher patient satisfaction scores on the intervention units.
Hourly rounding has been shown to reduce patient fall rates in hospital settings compared to using tab or bed alarms alone. Study 1 found that implementing hourly rounding on inpatient units resulted in a 23% reduction in falls, though it was not statistically significant. Study 2 was a systematic review finding that most studies reported hourly rounding as an effective way to reduce fall rates in hospitals. Together these studies indicate that hourly rounding may help prevent falls more than other interventions alone and has been associated with reductions in fall rates in various hospital settings.
This study investigated the influence of hospital safety climate on patient satisfaction and nursing care quality. Data was collected from nurses and patients at an Egyptian emergency hospital using questionnaires on safety climate, patient satisfaction, and quality of nursing care. The results found that 50% of respondents reported a low safety climate score and only 29.5% of patients were highly satisfied. Nurses reported that the quality of care was low for 69% of patients. A significant relationship was found between safety climate and both patient satisfaction and nursing care quality. The study concluded that improving the hospital safety climate can positively influence patient outcomes like satisfaction and quality of care.
Dr Ayman Ewies - Clinical audit made easy (AymanEwies)
This document provides an overview of how to conduct a clinical audit. It defines clinical audit as a process used by healthcare professionals to systematically review, evaluate and improve patient care. The document outlines the key components of an audit, including choosing a topic, selecting standards, planning methodology, collecting data, analyzing results, and implementing changes. It emphasizes that the goal of audit is to compare current practices to standards in order to enhance quality of care and patient outcomes.
Test Bank For Lewis's Medical-Surgical Nursing, 12th Edition by Mariann M. H... (nursing premium)
A Test bank is a ready-made electronic Q&A testing resource that is tailored to the contents of an individual textbook. Feedback is often provided on answers given by students, containing page references to the book.
This document provides an overview of a presentation on the science of safety training. Some key points:
- The presenter has over 24 years of experience in healthcare and various safety-related certifications and memberships.
- The presentation covers topics like historical context of patient safety, learning from defects, and celebrating safety. It also discusses tools to measure safety culture like the Safety Attitudes Questionnaire.
- The presentation describes how the Comprehensive Unit-based Safety Program (CUSP) was implemented at Tawam Hospital. Initial assessments found issues like hierarchies and a tendency to blame individuals for errors. CUSP helped establish a culture focused on systems and teamwork.
The document describes a quality improvement project to increase hand hygiene compliance at a hospital. Baseline data showed compliance was only 26%. A team analyzed the problem and identified solutions. These included an awareness training program, educational materials, ensuring hand hygiene supplies, and involving leaders. Regular audits and feedback to staff on compliance will also be implemented. The plan is to improve compliance to 90% by March 2014 through these multi-pronged interventions.
Here is a professionally written paragraph on the topic with an APA formatted citation:
Alarm fatigue poses a significant patient safety risk in healthcare facilities. When nurses are inundated with a high volume of alarms, some of which are clinically irrelevant, it can lead to desensitization and delays in response to critical alarms (Sendelbach & Jepsen, 2013). Nuisance or false-positive alarms are a key driver of alarm fatigue, as they do not indicate an actual adverse patient condition but still interrupt care providers (Graham & Cvach, 2010). The overuse of alarms has created a "cry wolf effect" wherein nurses start to mistrust clinical alarm systems due to the frequency of irrelevant alerts (Cvach, 2010).
This study evaluated the association between leadership walkrounds (WRs) and caregiver assessments of patient safety climate and risk reduction across 49 hospitals. WRs involve hospital leaders visiting clinical units to openly discuss safety issues with staff. The study found that units where ≥60% of caregivers reported exposure to at least one WR had significantly higher safety climate scores, greater reported risk reduction, and more feedback on actions taken compared to units with <60% exposure. Higher rates of WR participation at the unit level were positively associated with more favorable caregiver assessments of patient safety culture and outcomes.
This document describes a study that evaluated a new framework for end-of-life care and withdrawal of treatment on an intensive care unit. Staff completed questionnaires before and after the introduction of the framework to assess changes in knowledge, quality of care, and satisfaction. Results showed improvements in staff knowledge, increased confidence that patients' comfort needs were being met, and greater satisfaction with end-of-life care processes after implementing the framework. The study concludes the framework was associated with enhanced end-of-life care delivery and communication on the ICU.
Objectives:
By the end of this call, you will be able to:
•Describe the processes of Root-Cause Analysis (RCA) and Multi-Incident Analysis (MIA) and their role in quality improvement
•Compare and contrast the different approaches to collecting hospital-acquired VTE data
•Identify an approach suitable for improving patient safety at your institution
Capstone Project Change Proposal Presentation for Faculty Review and Feedback (bartholomeocoombs)
Assessment Description
Create a 10-15 slide Power Point presentation of your evidence-based intervention and change proposal to be disseminated to an interprofessional audience of leaders and stakeholders. Include the intervention, evidence-based literature, objectives, resources needed, anticipated measurable outcomes, and how the intervention would be evaluated. Submit the presentation in the digital classroom for feedback from the instructor.
PICOT Question (See other file uploaded)
Interventions
Falls can cause several complications, including increased health care costs, severe health issues, and immobility. Given the severity of this issue, appropriate interventions should be put in place. In this context, proper monitoring is one of the most significant interventions for preventing falls (Huang et al., 2020), so incorporating educated, efficient technicians into patient care can be an essential step. Because of decreased mobility or functionality, older people often need help with basic activities, such as changing into hospital-approved gowns (Liu-Ambrose et al., 2019). In addition, one significant and effective intervention is providing brief education to the patient on fall-prevention strategies (Radecki, Reynolds & Kara, 2018). Another critical aspect is providing a safe environment for clinical care: outpatient clinics should improve their workflow and environmental conditions, for example by removing hazardous materials and keeping floors clean and dry, so that the clinic offers a safe area for older patients. These interventions can help prevent falls (Guirguis-Blake et al., 2018).
Benchmark - Capstone Change Project Objectives
1. Prevent elderly falls in an outpatient radiology clinic.
Rationale: Falls occur as age advances due to individual risk factors or environmental factors, such as gait or balance deficits, chronic conditions, medications, and the footwear the patient is wearing. Assisting these patient populations can prevent falls in the department.
2. Educate patients and people in the community on how to prevent falls.
Rationale: Educate patients regarding the physical changes and chronic health conditions that cause falls or increase the probability of falling.
3. Provide a safe environment for clinical care in the outpatient clinical setting.
Rationale: Design the clinical area accessible to patients in wheelchairs, with assistive devices, and with mobility deficits. Have handrails on walls and hallways for support, clean, non-skid floors, and lighted pathways in hallways, rooms, and bathrooms.
4. Make a patient care technician (PCT) available in the outpatient clinical area for patients.
Rationale: Having a PCT in the clinical area, especially around the dressing rooms, would benefit the patients needing help when changing to hospital-approved gowns and monitoring patients for risk.
Similar to TESTING Validity: Internal Validity of Test Items and Item Analysis
NSH Simulation and Chemotherapy ONS Poster_PRINT VEND (1) (Melissa Jo Powell)
A hospital developed a chemotherapy training program for nurses using simulation with standardized patients. 20 nurses underwent online oncology training and two 8-hour simulation sessions practicing chemotherapy administration and communication skills. Simulation stations included chemotherapy spills, hypersensitivity reactions, and new drug administrations. Nurses were assessed on safety and communication skills. Evaluations found growth in safety awareness but some struggled with empathetic communication. While simulation improved skills, its ability to fully assess real-world competency is unclear. Developing a validated assessment of oncology communication skills is important for specialty training.
Nursing Education: Academia to workplace - standardization within training. ... (Melissa Jo Powell)
Nursing Education: Academia to workplace. Training costs and performance issues related to lack of standardization. Constraints to standardization within training. Heatt 2014
Partnering with Patients as Teachers for Nurse Residents (Melissa Jo Powell)
Presentation at International Conference on Patient and Family Centered Care describing content and program evaluation data using simulation to teach communication skills and partnering with patients as teachers.
The alternative to ppt, case studies for teaching and learning (Melissa Jo Powell)
This document describes an alternative to traditional PowerPoint lectures called brain-based learning. It explains that PowerPoint is only effective for delivering new information and that real learning occurs when information is actively gathered, categorized, applied, and reviewed. Brain-based learning incorporates these active learning strategies and is more effective for retention. It suggests using goals, case studies, problem-based learning, peer instruction, and having learners apply information to achieve deeper learning compared to passive receipt of information from PowerPoint alone. The document provides an example of how to structure a brain-based learning session using initial goals, breakout groups to collaboratively solve cases, and reinforcement of key ideas.
The document summarizes the development and evaluation of a training program to improve nurse-patient communication skills at Vanderbilt University Medical Center. It describes conducting a needs assessment that found opportunities to better educate patients. A class was designed using experiential learning theory, with a presentation, practice simulations, and commitment to use "teach back" technique. Evaluations assessed satisfaction, learning, demonstration of skills, and observations of teach back use, finding greater use by nurses who completed the training.
This document outlines a plan to re-evaluate and redesign a unit-based orientation program. A needs assessment was conducted to identify gaps, such as high blood culture contamination and hemolyzation rates. Goals and objectives for the new program were set, such as increased nurse satisfaction and improved quality metrics. A gap analysis identified needs for additional resources like online modules, simulation training, and preceptor training. The redesign proposes dividing orientation into online learning, classroom sessions, skills labs and simulation, and unit preceptorship. Assessment methods like surveys and quality metric tracking are discussed to evaluate the new program's return on investment.
Program Evaluation of In-Situ Simulation Team Training (Melissa Jo Powell)
In-situ team training is just in time learning. It provides clinical staff the opportunity to practice in their own work environment with their own teams. The demand from leaders in organizations is that educators prove ROI. Here is a framework to proving team training works!
Just-in-Time Education for staff nurses about teaching patients about CHF (Melissa Jo Powell)
The document discusses a heart failure education transitions of care project at VUMC. It aims to develop standardized processes for heart failure patient education across care settings using consistent tools and content. This includes developing standardized educational materials located in a central area, defining reliable processes involving key stakeholders, and delivering evidence-based education using teach back principles. The project uses a "Heart Failure Bull's Eye" tool to engage patients and assess their understanding, with the goal of progressing them toward self-care. Teach back is emphasized as the best way to ensure understanding, and elements of effective patient teaching are outlined.
Oncology Nursing Society 2011 Simulation Presentation (Melissa Jo Powell)
During a chemotherapy simulation, nurses will demonstrate safe, patient-centered care when administering chemotherapy. They will employ evidence-based teaching techniques to ensure patient understanding and safety. Nurses will identify potential adverse events and manage interventions if any occur. During debriefing, nurses will reflect on their experience to conceptualize newly learned ideas. The simulation consists of three scenarios focusing on chemotherapy order review, safety processes, administration techniques, and management of complications.
TeamSTEPPS 2013 Presentation "Create your own simulations and evaluate them" (Melissa Jo Powell)
The document discusses a presentation on using simulations and debriefs to evaluate team performance. Some key points:
- The presentation covered developing TeamSTEPPS training scenarios, using simulations and checklists to evaluate behaviors, and identifying the importance of reflection and psychological safety during debriefs.
- A case study was presented on using simulations to address delays in calling rapid response teams at a hospital. Post-training surveys found that simulations helped build confidence and that new communication techniques would be employed.
- Debriefings after simulations were found to promote self-reflection and identification of barriers to improve performance. Teams that debriefed were shown to perform up to 40% better.
Oncology Nursing Society 2013 Teach back poster presentation (Melissa Jo Powell)
This document describes a training program that used simulation and standardized patients to teach nurses effective communication techniques for patient education, including teach-back. Nurses participated in an online module on teach-back and chemotherapy education, then practiced their skills with standardized patients in a simulated chemotherapy lab. Nurses' communication was evaluated using checklists and video review. Feedback was provided. Surveys found nurses agreed the training would help them better educate patients in their practice. The goal was to improve nurses' patient education and use of teach-back to validate patient understanding.
An interprofessional team at Vanderbilt University Medical Center participated in an in situ simulation training on responding to medical emergencies. The training included nurses, respiratory therapists, nurse practitioners, medical residents, and care partners. It focused on recognizing triggers for activating the rapid response team, performing CPR, and improving team communication. Shortly after one care partner participated, a real code was called and he was able to apply the skills and communication practices learned, which helped save the patient's life according to the code leader.
Evaluation of skills lab on blood culture contamination performance after tra... (Melissa Jo Powell)
This document discusses an education intervention for reducing blood culture contamination. It describes providing an online module and hands-on skills lab training to nurses on proper technique for blood culture collection. The skills lab involves teacher-guided hands-on demonstration and practice to help nurses develop correct muscle memory and prevent contamination. Contamination can lead to false positive diagnoses for bloodstream infections, unnecessary treatment, and costs of $23,000 per incorrect diagnosis for patients with central lines. Graphs show that contamination rates decreased on the unit that received the training compared to sister units.
TESTING Validity: Internal Validity of Test Items and Item Analysis
Course VALIDITY & ASSESSMENT
Learner/Practitioner Assessment Project
Purpose of the Assessment: The purpose of the test was to assess the knowledge of nurses completing an in-service training about “Patient Safety”.
Persons being assessed: The learners who took the “Patient Safety” test are nursing staff on the 8th floor of Vanderbilt University Medical Center, with varying years of nursing experience inside and outside of Vanderbilt. The learners were attendees of the inservice, and attendance counted toward their required 4 hours of annual inservice time, which works as a motivator for nurses to attend inservices.
Framework – content: The content for the inservice was derived from current findings published by the Institute for Healthcare Improvement safety initiative called Transforming Care at the Bedside (TCAB) (Viney et al. 2006). The concepts in the inservice were presented to staff to explain key quality and safety concepts about inpatient acute hospital falls, hospital medication errors, adverse events, and nosocomial pressure ulcers. One arm of the recommendations stemming from TCAB is that nurses and teams benefit from current knowledge and awareness of evidence-based research on patient safety and hospital quality improvement.
Framework – measurement and outcome level: The assessment for this inservice used a criterion-referenced framework. The level of learning outcome assessed is 3A, Learning: Declarative Knowledge, measured by posttest (Moore et al. 2009). The passing score for this test was 70%; learners who did not achieve a score of 70% or greater did not receive a full hour of inservice time. Of the 37 learners taking the assessment, 33 scored above the 70% mark.
Data Collection tool: A 12-item true/false, web-based online posttest. The link was emailed to each attendee on the Friday following the 4 separate nightshift and dayshift inservice events. The test was not proctored, there was no discussion of using other resources, and attendees were told that the test would be based on the PowerPoint lecture and that 70% would be passing.
Person(s) completing the data collection tool: Participants in the inservice completed the test.
Frequency of data collection and the sample: The test was assigned once after the inservices and taken online within two weeks of the inservice for full inservice credit. It was a one-time test with no remediation. 100% of inservice attendees took the test.
Descriptive Results from the data set:
Two learners are missing from some of these data: one learner did not answer every question, and another was not a nurse but an ancillary staff member. Their data were removed from reliability testing and item analysis. The first bar chart describes all test takers and their percent of items correct, with a mean of 91% and a standard deviation of 16.5.
TABLE 1. [Bar chart: number of learners by percent of items correct]
TABLE 2.
All Learners Percent Correct

Percent Correct   Frequency   Percent   Valid Percent   Cumulative Percent
 33.33                1          2.7          2.7              2.7
 41.67                1          2.7          2.7              5.4
 58.33                1          2.7          2.7              8.1
 66.67                1          2.7          2.7             10.8
 75.00                2          5.4          5.4             16.2
 83.33                1          2.7          2.7             18.9
 91.67                7         18.9         18.9             37.8
100.00               23         62.2         62.2            100.0
Total                37        100.0        100.0
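As a cross-check, the distribution in Table 2 can be reproduced with a short script. The scores below are read directly from the table; this is an illustrative sketch, not part of the original SPSS analysis:

```python
import math

# Percent-correct scores read from Table 2 (score -> number of learners)
dist = {33.33: 1, 41.67: 1, 58.33: 1, 66.67: 1,
        75.00: 2, 83.33: 1, 91.67: 7, 100.00: 23}
scores = [s for s, count in dist.items() for _ in range(count)]

n = len(scores)
mean = sum(scores) / n
# Sample standard deviation (ddof = 1), as SPSS reports it
sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
passed = sum(s >= 70 for s in scores)  # learners at or above the 70% cutoff

print(f"n = {n}, mean = {mean:.1f}, sd = {sd:.2f}, passed = {passed}")
# prints: n = 37, mean = 91.2, sd = 16.54, passed = 33
```

The computed mean, standard deviation, and pass count match the values reported in this document.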
TABLE 3.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.855               .866                                           11
In Table 3, the number of items for which reliability testing could be performed is 11. One item is not included in the reliability measure because not all learners answered that question.
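For readers who want to reproduce the Table 3 statistic outside SPSS, Cronbach's alpha can be computed directly from the item-score matrix. A minimal sketch (the toy data below are invented for illustration, not the actual responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (one list per item, one entry per learner)."""
    k = len(items)        # number of items
    n = len(items[0])     # number of learners

    def var(xs):          # sample variance (ddof = 1), as SPSS uses
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Toy example: three true/false items scored 0/1 for six learners
items = [
    [1, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 1],
]
alpha = cronbach_alpha(items)
```

Two perfectly correlated items would yield an alpha of exactly 1.0; the toy data above give a lower value because the items agree only partially.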
TABLE 4.
Item Statistics (Mean = proportion of learners answering correctly; SD = standard deviation; N = 35 for every item)

gait_belts_scored: "Gait belts are used to prevent falls." Mean .7429, SD .44344
device_pu_scored: "Device related pressure ulcers may be unpreventable when a patient is compromised nutritionally, has poor perfusion and must have device secured in place for life support." Mean .8857, SD .32280
toiletting_scored: "Per Vanderbilt policy, if you assist a patient to the toilet, you must stay with them." Mean .9429, SD .23550
reimbursed_scored: "As of 2012, hospitals are reimbursed related to their patient safety scores." Mean .9143, SD .28403
rrt_scored: "Rapid Response Systems were designed to prevent failure to rescue. Calling Rapid Response for first recognition of trigger is the reliable way to ensure Rapid Response Systems remain reliable." Mean .9714, SD .16903
reliability_scored: "Hospital reliability and nursing communication related to patient safety must include checklists, standardized communication formats and information system checks." Mean .9714, SD .16903
transfusion_scored: "Transfusion errors begin at the point of collecting the specimen." Mean .9714, SD .16903
ebp_fall_scored: "Some hospitals are using hip protectors and helmets on patients who are known for falling." Mean .9714, SD .16903
stop_pu_scored: "Pressure ulcers are prevented by appropriate surface selection, regular repositioning and turning, optimizing temperature control, and preventing moisture/providing moisture barrier products." Mean .8571, SD .35504
fall_liability_scored: "Patients who fall who have stated a high desire for independence, who have stated they do not have to use the call bell, can not hold us liable if they fall and are hurt." Mean .8857, SD .32280
zero_scored: "Falls are preventable and achieving zero falls has been attained in other hospitals." Mean .8286, SD .38239
In Table 4 the item statistics are presented. The mean for each item is the percent of learners getting that item correct. As noted above, two learners (one who did not answer every question and one ancillary staff member) were removed from reliability testing and item analysis, leaving N = 35. The first item, "Gait belts are used to prevent falls," is a false statement; a true statement would be "Gait belts are used to prevent injury during falls." I suspect learners missed it because it is a slightly tricky question.
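The item means in Table 4 double as classical difficulty indices: each mean is the proportion of learners answering the item correctly. A small sketch that flags items below a chosen difficulty threshold (the 0.80 cutoff is an illustrative choice, not part of the original analysis):

```python
# Item means (proportion correct) copied from Table 4
item_means = {
    "gait_belts_scored": 0.7429, "device_pu_scored": 0.8857,
    "toiletting_scored": 0.9429, "reimbursed_scored": 0.9143,
    "rrt_scored": 0.9714, "reliability_scored": 0.9714,
    "transfusion_scored": 0.9714, "ebp_fall_scored": 0.9714,
    "stop_pu_scored": 0.8571, "fall_liability_scored": 0.8857,
    "zero_scored": 0.8286,
}

THRESHOLD = 0.80  # flag items answered correctly by fewer than 80% of learners
flagged = sorted(name for name, p in item_means.items() if p < THRESHOLD)
print(flagged)  # only the "tricky" gait-belt item falls below this cutoff
```

With this cutoff, only gait_belts_scored is flagged, consistent with the discussion of that item above.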
TABLE 5.
Item-Total Statistics (item wording as in Table 4; squared multiple correlations were reported as "." in the output)

Item                     Scale Mean       Scale Variance   Corrected Item-     Cronbach's Alpha
                         if Item Deleted  if Item Deleted  Total Correlation   if Item Deleted
gait_belts_scored        9.2000           2.929            .690                .834
device_pu_scored         9.0571           3.291            .664                .833
toiletting_scored        9.0000           3.471            .737                .832
reimbursed_scored        9.0286           3.499            .558                .842
rrt_scored               8.9714           3.793            .533                .848
reliability_scored       8.9714           3.793            .533                .848
transfusion_scored       8.9714           3.852            .441                .852
ebp_fall_scored          8.9714           3.970            .260                .859
stop_pu_scored           9.0857           3.081            .775                .822
fall_liability_scored    9.0571           3.114            .838                .817
zero_scored              9.1143           3.692            .228                .876
Table 5 describes item-total statistics. The "Cronbach's Alpha if Item Deleted" values are all very good, indicating strong internal consistency. Cronbach's alpha serves as an index of consistency and an approximation to test-retest reliability.
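The corrected item-total correlation in Table 5 is the Pearson correlation between an item's scores and the total of the remaining items (the item itself is excluded so it does not inflate the correlation). A minimal sketch with hypothetical 0/1 data:

```python
import math

def corrected_item_total(items, idx):
    """Pearson r between item `idx` and the sum of all other items."""
    n = len(items[0])
    rest = [sum(col[i] for j, col in enumerate(items) if j != idx)
            for i in range(n)]
    x = items[idx]
    mx, my = sum(x) / n, sum(rest) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, rest))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in rest))
    return cov / (sx * sy)

# Hypothetical responses: three items, four learners (invented for illustration)
items = [[1, 0, 1, 0],
         [1, 0, 0, 0],
         [1, 1, 1, 0]]
r = corrected_item_total(items, 0)
```

Items with low corrected correlations (such as zero_scored at .228 in Table 5) contribute least to the scale's internal consistency.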
Measurement Characteristics:
Reliability
We are able to derive measures of internal consistency, such as the test item intercorrelations, and to reject or accept questions based on their reliability coefficients. Here we were able to accept all items, and dropping the last item would make little difference.
My index of consistency was Cronbach's Alpha, which was 0.86 for the 11 test items. This is a very good level of internal consistency. The standard deviation for this test is 16.54, and the mean score is 91.2, meaning the average test score of all participants was 91.2%. The standard error of measurement (SEM) is an estimate of error to use in interpreting an individual's test score; a test score is an estimate of a person's "true" test performance. Using the reliability coefficient and the test's standard deviation, we can calculate this value:

SEM = SD × √(1 − r)

The standard error of measurement of the test scores was 6.40. With 99% confidence the mean true test score lies between 74.69 and 100 (±16.51); with 95% confidence it lies between 78.66 and 100 (±12.54).
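The SEM calculation can be sketched in a few lines using the summary statistics reported in this document. The reliability coefficient is rounded to 0.85 here, which approximately reproduces the reported SEM of 6.40; small rounding differences in the confidence bands are expected:

```python
import math

sd = 16.54        # standard deviation of the test scores (percentage points)
r = 0.85          # reliability coefficient (Cronbach's alpha, rounded)
mean_score = 91.2

sem = sd * math.sqrt(1 - r)  # standard error of measurement, roughly 6.40

# Confidence bands around the mean observed score, capped at 100%
for z, label in [(1.96, "95%"), (2.58, "99%")]:
    low = mean_score - z * sem
    high = min(100.0, mean_score + z * sem)
    print(f"{label}: {low:.2f} to {high:.2f} (half-width {z * sem:.2f})")
```

The 1.96 and 2.58 multipliers are the standard normal z-values for 95% and 99% confidence.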
Validity
This assessment was a measure of how much was understood about the concepts and ideas presented in a staff inservice about safety and quality. Nurses who do not have a general understanding of key ideas about safety and quality may have less motivation to implement new processes and strategies to improve quality and safety.
Decisions: Those who score 70% or higher on this assessment are given a full inservice hour toward the 4 hours required by the department; those who score less than 70% receive only a half hour. This assessment is also formative in that it gives learners feedback about where they have weaknesses or could do further study.
Content validity was assured because each question on the test was quoted exactly from the inservice and from the PowerPoint slides shown there, and the content was related to the learning objectives given at the beginning of the class.
Construct validity relates to the importance of understanding key points about patient safety and quality in the hospital setting. These ideas are also key points reflected in the Joint Commission's National Patient Safety Goals, and Vanderbilt University Medical Center's 5 Pillar Goals for 2012 likewise address patient safety and quality, including preventing falls and pressure ulcers. The questions came directly from the lecture, and the content of the assessment is the content of the inservice materials.
Taking Kane's "argument-based approach to validity": under Criterion 1, Clarity of the Argument, the inservice lecture and test are based directly on the newest evidence-based points underlying the Transforming Care at the Bedside initiative and the National Patient Safety Goals set by The Joint Commission. These points of evidence lay the foundation for understanding the patient safety and quality improvement initiatives occurring in American hospitals, and the inservice was conducted to spread the latest evidence-based information and increase nurses' base knowledge. Under Criterion 2, Coherence of the Argument, because the transmitted evidence-based information is relevant to a nurse's work, the test is a way to measure the transmission of that information. Under Criterion 3, Plausibility of the Assumptions, it is very plausible that the test is valid because true test questions are quoted exactly from the lecture and PowerPoint slides, and false questions change the statement in a simple way to make it false.

Other sources of error and unwanted variance could undermine the measurement characteristics of this assessment. Nine examples of possible sources of error:
1. A test taker not being present during the inservice would undermine the results of the test.
2. The questions must be phrased in a clear, non-confusing way.
3. There could be, and was, an attendee who was not a nurse but an ancillary staff member who wanted to attend and take the test; I did not include them in the reliability and item analysis calculations.
4. Learning or reading disabilities that any participant may have could interfere with test-taking ability.
5. A distraction could cause a test participant to accidentally mark an answer they did not intend.
6. The test was given through REDCap, and scoring was completed precisely using SPSS, which minimizes scoring error.
7. Some of the nurses may have already known the information, to the degree that the inservice was unnecessary for them.
8. While this patient safety inservice is not given to improve patient safety directly, it is given to improve nurses' motivation and involvement in unit and patient safety awareness.
9. Those who scored poorly may have already met their inservice time requirement and not taken the test seriously.
There are numerous other possible sources of error (Kane 1992).
Improvement Plan
1. The first way to ensure that knowledge is being gained is to use this test as a pre-test: I could assign it before giving the class to assess baseline knowledge.
2. Content validity could be improved by having a few nurse colleagues assess the test for content as well as question writing (Miller & Linn 2000).
3. Next, I could repeat this test and measure the test item intercorrelations, re-conducting the inservice on another floor with a separate, new cohort to see whether and how the data differ.
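If the pre-test in item 1 were added, the pre-to-post gain could be summarized with a paired comparison. A minimal sketch with hypothetical scores (these numbers are invented purely for illustration):

```python
import math

# Hypothetical pre/post percent-correct scores for five learners
pre  = [58.3, 66.7, 75.0, 50.0, 83.3]
post = [91.7, 100.0, 91.7, 75.0, 100.0]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_gain = sum(diffs) / n
# Sample SD of the paired differences (ddof = 1)
sd_gain = math.sqrt(sum((d - mean_gain) ** 2 for d in diffs) / (n - 1))
t = mean_gain / (sd_gain / math.sqrt(n))  # paired t statistic for the gain
print(f"mean gain = {mean_gain:.1f} points, paired t = {t:.2f}")
```

The paired t statistic would then be compared against a t distribution with n − 1 degrees of freedom to judge whether the gain is statistically significant.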
References
1. Viney MM, Batcheller JM, Houston S, Belcik KB. Transforming Care at the Bedside: designing new care systems in an age of complexity. J Nurs Care Qual. 2006;21(2):143–150.
2. Rutherford P, Moen R, Taylor J. TCAB: the "how" and the "what." Am J Nurs. 2009;109:5–17.
3. Kane MT. An argument-based approach to validity. Psychol Bull. 1992;112(3):527–535.
4. Moore DE Jr. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1.
5. Miller DM, Linn RL. Validation of performance-based assessments. Appl Psychol Meas. 2000;24(4):367.