This document provides multiple-choice questions and explanations covering key concepts in evidence-based practice and study design. It reviews the hierarchy of evidence, in which randomized controlled trials carry the lowest risk of bias and case reports the highest. Cross-sectional studies are best suited to measuring prevalence, while cohort studies measure incidence and can be conducted prospectively or retrospectively; cohort studies are considered the gold standard for analytical epidemiology. Randomized controlled trials are the strongest design for assessing the efficacy of interventions. Critical appraisal of randomized trials considers randomization, blinding, precision of the results, benefits versus harms, and applicability to one's own patients. The receiver operating characteristic (ROC) curve is used to evaluate diagnostic tests with multilevel or continuous results and to determine optimal cut-off values.
2. Which one of the following studies has the highest risk of bias?
A- Case report/series
B- Cross-sectional study
C- Case- control study
D- Cohort study
E- RCT
3. Evidence pyramid: moving up the pyramid, the risk of bias decreases and the level of evidence increases.
McGovern D, Summerskill W, Valori R, Levi M. Key topics in EBM. BIOS Scientific Publishers, 1st Edition, Oxford, 2001.
4. What is the best design you choose to study the prevalence of a disease?
A- Ecologic study
B- Cross sectional study
C- Case- control study
D- Cohort study
E- RCT
6. What is the best trial design to study the incidence of a disease?
A- Ecologic study
B- Cross-sectional study
C- Case-control study
D- Cohort study
E- RCT
7. Cohort study
• Investigates the etiology or outcome of a disease
• Usually prospective, following participants over a period of time (years or decades)
• Can be retrospective if there is a clear point of first exposure
• The 2 groups should be well matched to avoid confounding factors
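The cohort design above yields incidence directly, since participants are followed forward from exposure. A minimal sketch of the standard measures (cumulative incidence per group and the relative risk); all counts here are invented for illustration:

```python
# Illustrative sketch: incidence and relative risk from a cohort study.
# The 2x2 counts below are hypothetical, not from any cited study.

def cohort_measures(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Return cumulative incidence in each group and the relative risk."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    relative_risk = risk_exposed / risk_unexposed
    return risk_exposed, risk_unexposed, relative_risk

# Hypothetical cohort: 1000 exposed (50 develop disease), 1000 unexposed (20 do).
re_, ru_, rr_ = cohort_measures(50, 1000, 20, 1000)
print(f"Incidence (exposed):   {re_:.3f}")  # 0.050
print(f"Incidence (unexposed): {ru_:.3f}")  # 0.020
print(f"Relative risk:         {rr_:.1f}")  # 2.5
```

A relative risk of 2.5 would mean the exposed group has 2.5 times the risk of developing the disease over the follow-up period.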
8. Which of the following studies is considered a gold standard for analytical epidemiology?
A- Ecologic study
B- Cross-sectional study
C- Case-control study
D- Cohort study
E- RCT
12. You want to assess the efficacy of a new anti-epileptic drug versus an old drug. What is the best design you choose for this purpose?
A- Ecologic study
B- Cross sectional study
C- Case-control study
D- Cohort study
E- RCT
13. Basics of RCT – 3
RCTs are regarded as:
• Quantitative studies (quantified outcomes)
• The most rigorous method of hypothesis testing
• Experimental studies (versus observational studies)
• The gold standard to evaluate the effectiveness of interventions
Jadad AR, Enkin MW. Randomized controlled trials. Blackwell Publishing, 2nd ed, 2007.
14. Basic structure of a RCT
The parallel trial is the most frequently used design.
Akobeng AK. Arch Dis Child 2005 ; 90 : 840 – 844.
15. Question type & study design
Question → Study design
Intervention → RCT
Incidence & prognosis → Cohort study
Prevalence → Cross-sectional study
Etiology & risk factors → Cohort or case-control study
Diagnosis → Cross-sectional study
In each case, a systematic review of all available studies is better than any individual study.
16. You read in a paper that a p value is 0.01. Is this result clinically significant?
A- Yes
B- No
C- Cannot tell
17. Probability value (p value)
• p > 0.05: statistically non-significant
• p < 0.05: statistically significant
Statistically significant doesn't mean clinically significant.
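The gap between statistical and clinical significance is easiest to see with a large trial: a trivial effect can still produce a small p value. A stdlib-only sketch with invented numbers (a hypothetical 0.5 mmHg blood-pressure difference between arms), using the normal approximation for a two-sample z-test:

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trial: mean BP reduction differs by 0.5 mmHg between arms,
# SD = 10 mmHg in each arm, n = 10,000 per arm (all numbers invented).
diff, sd, n = 0.5, 10.0, 10_000
se = math.sqrt(sd**2 / n + sd**2 / n)   # standard error of the difference
z = diff / se
p = two_sided_p_from_z(z)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is well below 0.05
```

Here p ≈ 0.0004, comfortably "statistically significant", yet a 0.5 mmHg difference would be clinically meaningless, which is exactly the slide's point.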
18. An open label randomized controlled trial means:
A- Everyone participating in the trial is aware of the assigned treatment
B- Patients are ignorant of the assigned treatment
C- Investigators are ignorant of the assigned treatment
D- Patients, investigators and data evaluators are ignorant of the assigned treatment
19. Blinding or masking
Depending on the extent of blinding, RCTs are classified as:
• Open label: everyone aware
• Single-blind: only patients or investigators ignorant
• Double-blind: patients & investigators ignorant
• Triple-blind: patients, investigators & data evaluators ignorant
20. A critical appraisal of a RCT takes into consideration which of the following:
A- Randomization
B- Blinding
C- Precision of the estimate (CI)
D- Benefit versus harm
E- All of the above
21. Internal & external validity of a RCT
Attia J & Page J. Evid Based Med 2001 ; 6 : 68 - 69.
22. Appraising a RCT (checklist) – 1
Are the results valid?
At start of trial: Were the patients randomized? Was the randomization concealed? Were prognostic factors similar in the 2 groups?
During trial: Was the trial blinded & to what extent?
At end of trial: Was follow-up complete? Was the ITT (intention-to-treat) principle applied? Was the trial stopped early?
Guyatt G, et al. Users' guides to the medical literature. Essentials of evidence-based clinical practice. McGraw-Hill, 2nd ed, 2008.
23. Appraising a RCT (checklist) – 2
What are the results?
8- How large was the treatment effect?
9- How precise was the estimate of the treatment effect (CI)?
How can I apply the results to patient care?
10- Were the study patients similar to my patient?
11- Were all patient-important outcomes considered?
12- Are the likely treatment benefits worth the harm & cost?
Guyatt G, et al. Users' guides to the medical literature. Essentials of evidence-based clinical practice. McGraw-Hill, 2nd ed, 2008.
24. External validity
Applicability of results to your patients.
Issues to consider before deciding to incorporate research evidence into clinical practice:
• Similarity of the study population to your population
• Benefit versus harm
• Patients' preferences
• Availability
• Costs
Guyatt G, et al. Users' guides to the medical literature. Essentials of evidence-based clinical practice. McGraw-Hill, 2nd edition, 2008.
26. Benefit versus harm
“All that glisters is not gold”
W. Shakespeare
In “The Merchant of Venice”
Furberg BD & Furberg CD. Evaluating clinical research.
Springer Science & Business Media – 1st Edition – New York – 2007.
27. The receiver operating characteristic is used to report:
A- Incidence of a disease
B- Prevalence of a disease
C- Prognosis of a disease
D- Diagnostic test with 2 results (yes/no)
E- Diagnostic test with more than 2 results
28. Accuracy of tests & number of results
• Dichotomous test (only 2 results):
Sensitivity & specificity
PPV & NPV
Likelihood ratio + & –
Diagnostic OR
• Multilevel test (> 2 results):
Receiver Operating Characteristic (ROC)
Make a continuous test dichotomous: fixed cut-off value with 95% CI
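The dichotomous-test measures listed above all derive from one 2×2 table of test result against true disease status. A minimal sketch with an invented table (90 true positives, 20 false positives, 10 false negatives, 80 true negatives):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy measures for a dichotomous diagnostic test (2x2 table)."""
    sens = tp / (tp + fn)        # sensitivity: P(test+ | disease+)
    spec = tn / (tn + fp)        # specificity: P(test- | disease-)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # likelihood ratio +
    lr_neg = (1 - sens) / spec   # likelihood ratio -
    dor = lr_pos / lr_neg        # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg, dor=dor)

# Hypothetical 2x2 table (counts invented for illustration).
m = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
print(m)  # sensitivity 0.90, specificity 0.80, LR+ 4.5, LR- 0.125, DOR 36
```

Note that sensitivity and specificity are properties of the test, while PPV and NPV also depend on disease prevalence in the tested population.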
29. Which of the following is used to determine the cut-off values of a diagnostic accuracy test (disease positive versus disease negative):
A- Positive predictive value
B- Negative predictive value
C- Likelihood ratio
D- Receiver operating characteristic
30. Useful properties of the ROC curve
• The AUC provides an overall measure of a test's accuracy
• Accuracy of a binary diagnostic test at a given cut-point value
• Determination of the cut-off point to distinguish D+ & D–
• Comparison of different tests for diagnosis of a target disorder
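The AUC has a direct probabilistic reading (used on the next slide): it equals the probability that a randomly chosen diseased patient scores higher than a randomly chosen non-diseased one. A sketch of that rank-based computation (equivalent to a scaled Mann–Whitney U statistic), with invented scores where higher means more disease-like:

```python
def auc_from_scores(diseased, healthy):
    """AUC as P(random diseased score > random healthy score); ties count 0.5.
    Equivalent to the Mann-Whitney U statistic scaled to [0, 1]."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))

# Hypothetical test scores (invented for illustration).
diseased = [8.1, 7.4, 6.9, 5.2]
healthy = [3.0, 4.1, 5.2, 2.7]
print(auc_from_scores(diseased, healthy))  # 0.96875
```

An AUC of 0.5 would mean the test is no better than chance; 1.0 would mean perfect separation of the two groups.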
31. Area under the ROC curve in IDA (iron deficiency anemia)
If we select 2 patients at random, one with IDA & one without, the probability is 0.91 that the patient with IDA will have an abnormal ferritin.
32. Accuracy of a diagnostic test using the AUC of the ROC
Value → Accuracy
0.90 – 1.00 → Excellent
0.80 – 0.90 → Good
0.70 – 0.80 → Fair
0.60 – 0.70 → Poor
The higher the AUC, the better the overall performance of the test.
Pines JM & Everett WW. Evidence-based emergency care: diagnostic testing & clinical decision rules. Blackwell Publishing, West Sussex, UK, 2008.
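The bands above translate directly into a small lookup. A sketch (the handling of exact boundary values like 0.90, and the label for values below 0.60, are my assumptions, since the slide does not specify them):

```python
def auc_grade(auc):
    """Qualitative accuracy label for an AUC, per the bands quoted on the slide.
    Boundary values are assigned to the higher band (an assumption);
    the label for AUC < 0.60 is also an assumption, not from the slide."""
    if auc >= 0.90:
        return "Excellent"
    if auc >= 0.80:
        return "Good"
    if auc >= 0.70:
        return "Fair"
    if auc >= 0.60:
        return "Poor"
    return "Below all bands (0.5 = chance)"

print(auc_grade(0.91))  # Excellent (matches the serum ferritin AUC above)
```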
34. Determination of the cut-off point to distinguish D+ & D–
The cut-off point discriminates between subjects with or without disease.
It is indicated by the point on the curve farthest from the chance diagonal.
Peat JK. Health science research. Allen & Unwin, Australia, 1st edition, 2001.
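One common way to formalize "the point farthest from the chance diagonal" is Youden's J statistic (sensitivity + specificity − 1), maximized over candidate cut-offs. A sketch with invented scores; the slide itself does not name a specific method, so treating it as Youden's index is an assumption:

```python
def youden_cutoff(scores_diseased, scores_healthy, candidates):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    J is the vertical distance of an ROC point from the chance diagonal.
    Assumes higher scores indicate disease (test+ means score >= cut-off)."""
    best_cut, best_j = None, -1.0
    for c in candidates:
        sens = sum(s >= c for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < c for s in scores_healthy) / len(scores_healthy)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

# Hypothetical scores, invented for illustration.
diseased = [9, 8, 7, 6, 4]
healthy = [5, 4, 3, 2, 1]
cut, j = youden_cutoff(diseased, healthy, candidates=range(1, 11))
print(cut, round(j, 2))  # 6 0.8
```

In practice the candidate cut-offs are usually the observed score values themselves, and the choice may also weigh the clinical costs of false positives versus false negatives rather than J alone.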
35. Diagnosis of IDA: AUC of the ROC
Comparing different tests for a target disorder:
Serum ferritin → 0.91
Transferrin saturation → 0.79
MCV → 0.78
RCP* → 0.72
* RCP: Red Cell Protoporphyrin
Guyatt GH et al. J Gen Intern Med 1992 ; 7 : 145 – 153.