Statistics

  1. Statistics
  2. Reference interval
     - A 95% reference interval is an interval that contains the central 95% of the population.
     - For normally distributed data, this is approximately the mean ± 2 SD.
     - It may be calculated as two standard deviations on either side of the mean, or read directly from the frequency distribution.
  3. Summarizing data: the "average"
     - Mean (arithmetic average)
     - Median (middle observation)
     - Mode (most frequent value)
  4. Sensitivity
     - Sensitivity = proportion of people with the disease who test positive.
     - The probability of a positive test result among those with the disease: P(T+ | D+).
     - SnNout: a highly Sensitive test, if Negative, rules Out the disease.
     - Sensitive tests are therefore useful for screening.
  5. Specificity
     - Specificity = proportion of people without the disease who test negative.
     - The probability of a negative test result among those without the disease: P(T- | D-).
     - SpPin: a highly Specific test, if Positive, rules In the disease.
     - Specific tests are therefore useful for confirming a diagnosis.
  6. Prevalence
     - Prevalence is the number of individuals with a given disease at a given point in time, divided by the population at risk at that time.
     - Sensitivity and specificity do NOT change with prevalence.
     - Higher prevalence gives a higher PPV.
     - Lower prevalence gives a higher NPV.
  7. PPV and NPV
     - PPV (positive predictive value) = probability of having the target condition given a positive test result.
     - NPV (negative predictive value) = probability of not having the target condition given a negative test result.
  8. Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP); PPV = TP / (TP + FP); NPV = TN / (TN + FN)

     |               | Disease | No Disease |
     |---------------|---------|------------|
     | Test positive | TP      | FP         |
     | Test negative | FN      | TN         |
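The four formulae can be checked with a minimal Python sketch; the function name `diagnostic_metrics` and the way the counts are passed in are illustrative choices, not from the slides.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from the cells of a 2 x 2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # P(T+ | D+)
        "specificity": tn / (tn + fp),   # P(T- | D-)
        "ppv": tp / (tp + fp),           # P(D+ | T+)
        "npv": tn / (tn + fn),           # P(D- | T-)
    }

# Counts taken from the worked example on slide 10
print(diagnostic_metrics(tp=90, fp=1980, fn=10, tn=7920))
```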
  9. Example
     - A test has 90% sensitivity and 80% specificity, and is used in a population with 1% prevalence.
     - Calculate the PPV and NPV.
     - Steps: assume a population of 10,000 and construct the 2 x 2 table.
  10. Constructing the 2 x 2 table
     - Prevalence of 1%: 0.01 x 10,000 = 100 people have the disease.
     - Sensitivity of 90%: 90 of the 100 people with the disease test positive.
     - Specificity of 80%: 80% of the 9,900 people without the disease test negative = 7,920.

     |               | Disease | No Disease | Total  |
     |---------------|---------|------------|--------|
     | Test positive | 90      | 1,980      | 2,070  |
     | Test negative | 10      | 7,920      | 7,930  |
     | Total         | 100     | 9,900      | 10,000 |
  11. Answer
     - PPV = 90/2,070 ≈ 0.04 = 4%.
     - NPV = 7,920/7,930 ≈ 0.99 = 99%.
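As a check on the arithmetic, here is a minimal Python sketch of the same calculation; the variable names are illustrative, the numbers are those on the slides.

```python
# Worked example: 90% sensitivity, 80% specificity, 1% prevalence,
# in an assumed population of 10,000 (as on the slide).
population = 10_000
prevalence, sensitivity, specificity = 0.01, 0.90, 0.80

diseased = prevalence * population        # 100
healthy = population - diseased           # 9,900

tp = sensitivity * diseased               # 90 diseased people test positive
fn = diseased - tp                        # 10
tn = specificity * healthy                # 7,920 healthy people test negative
fp = healthy - tn                         # 1,980

ppv = tp / (tp + fp)                      # 90 / 2,070   ≈ 0.043 (slide rounds to 4%)
npv = tn / (tn + fn)                      # 7,920 / 7,930 ≈ 0.999 (slide rounds to 99%)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```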
  12. Likelihood ratio
     - The positive likelihood ratio is the amount of certainty gained after a positive test result.
     - PLR = Sensitivity / (1 - Specificity).
     - The negative likelihood ratio is the amount of certainty gained after a negative test result.
     - NLR = (1 - Sensitivity) / Specificity.
  13. From pre-test to post-test probability
     - Pre-test odds = pre-test probability / (1 - pre-test probability) = prevalence / (1 - prevalence).
     - Post-test odds = pre-test odds x LR.
     - Post-test probability = post-test odds / (post-test odds + 1).
  14. Example
     - A new D-dimer assay has a sensitivity for DVT of 95% and a specificity of 50%.
     - It is proposed to use it to screen a group of passengers with a pre-test DVT probability of 1%.
     - What is the post-test probability of DVT in an individual with a positive D-dimer result?
  15. Answer
     - Positive likelihood ratio = sensitivity / (1 - specificity) = 0.95 / 0.50 = 1.9.
     - Pre-test odds = 0.01 / (1 - 0.01) = 0.0101.
     - Post-test odds = 0.0101 x 1.9 = 0.0192.
     - Post-test probability = 0.0192 / 1.0192 = 0.0188, i.e. about 1.88%.
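A minimal Python sketch of the same likelihood-ratio chain, using the slide's figures; the helper function name is an illustrative choice.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Pre-test probability -> pre-test odds -> post-test odds -> post-test probability."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (post_test_odds + 1)

sensitivity, specificity = 0.95, 0.50
plr = sensitivity / (1 - specificity)      # positive likelihood ratio = 1.9
print(post_test_probability(0.01, plr))    # ≈ 0.0188, i.e. about 1.88%
```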
  16. SD and SE
     - The standard deviation measures the variability of data around the mean; it tells you how much variability to expect among individuals within a population.
     - The standard error describes how much variability to expect when estimating the mean from several different samples.
  17. Standard error
     - A measure of how far the sample mean is likely to be from the population mean.
     - Decreases with increasing sample size.
  18. SE = SD / √(sample size)
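A minimal sketch of how the standard error shrinks as the sample size grows, assuming an illustrative SD of 10 (not a figure from the slides):

```python
import math

sd = 10.0                                    # illustrative standard deviation
for n in (25, 100, 400):
    se = sd / math.sqrt(n)                   # SE = SD / sqrt(sample size)
    print(f"n = {n:3d}  ->  SE = {se:.1f}")  # quadrupling n halves the SE
```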
  19. If the sample size increases...
     - SE decreases.
     - The confidence interval becomes narrower.
     - The precision of the estimate increases.
     - "The larger the sample, the more confident you are that the sample mean is close to the true mean."
  20. P-value
     - Probability value.
     - The probability of observing a result at least this large by chance, assuming that the null hypothesis is true.
     - A p < 0.05 means that the likelihood of a difference this large arising by chance is less than 1 in 20.
     - By convention, p < 0.05 is accepted as statistically significant, and the null hypothesis is rejected.
     - However, even very small p-values do not indicate that the difference is clinically important (i.e. they say nothing about its magnitude).
     - "If the p-value is low, the null hypothesis must go."
  21. Null hypothesis, H0: no true difference. Alternative hypothesis, HA: there is a true difference.
  22. Errors
     - A type I error (alpha) is the probability of incorrectly concluding that there is a statistically significant difference when none exists. Alpha is the threshold to which the p-value is compared (the number after "p <").
     - Thus, a statistically significant difference reported as p < 0.05 means that there is less than a 5 percent chance that the observed difference could have occurred by chance.
     - A type II error (beta) is the probability of incorrectly concluding that there is no statistically significant difference when one exists. This error often reflects insufficient power of the study.
  23. Power
     - Power (calculated as 1 - beta) is the ability of a study to detect a true difference.
     - A negative finding may simply mean that the study was underpowered to detect a difference.
     - Power increases as sample size increases.
  24. Relative risk and absolute risk
     - RR: the relative risk (or risk ratio) equals the incidence in exposed individuals (I_T) divided by the incidence in unexposed individuals (I_C): RR = I_T / I_C.
     - AR: absolute risk = attributable risk = risk difference.
     - It reflects the additional incidence of disease related to an exposure, taking into account the background rate of the disease: AR = I_T - I_C.
  25. Odds ratio
     - The odds ratio equals the odds that an individual with a specific condition has been exposed to a risk factor, divided by the odds that a control has been exposed.
     - The odds ratio provides a reasonable estimate of the relative risk for uncommon conditions.
     - The relative risk and odds ratio are interpreted relative to the number one.
     - An odds ratio of 0.6 suggests that patients exposed to the variable of interest were 40 percent less likely to develop the outcome than the control group.
     - Similarly, an odds ratio of 1.5 suggests that the risk was increased by 50 percent.
  26. Example

     |           | Number | Deaths | Cumulative incidence |
     |-----------|--------|--------|----------------------|
     | Treatment | 1000   | 6      | I_T = 6/1000         |
     | Control   | 1000   | 8      | I_C = 8/1000         |

     - RR = I_T / I_C = 0.75, i.e. a 25% relative reduction in death.
     - AR = I_T - I_C = an absolute risk reduction of 2 per 1000 (2/1000).
     - You would need to treat 1000 people to prevent 2 more deaths than with the control treatment.
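A minimal Python sketch of the relative-risk and absolute-risk arithmetic, using the counts on this slide:

```python
deaths_treatment, n_treatment = 6, 1000
deaths_control, n_control = 8, 1000

i_t = deaths_treatment / n_treatment   # cumulative incidence, treatment = 0.006
i_c = deaths_control / n_control       # cumulative incidence, control   = 0.008

rr = i_t / i_c                         # 0.75 -> 25% relative reduction in death
arr = i_c - i_t                        # written as I_C - I_T so the absolute risk
                                       # reduction comes out positive: 0.002 (2 per 1000)
print(rr, arr)
```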
  27. Example

     |               | Disease | No Disease |       |
     |---------------|---------|------------|-------|
     | Positive test | a       | c          | a + c |
     | Negative test | b       | d          | b + d |
     |               | a + b   | c + d      | N     |

     - Odds ratio = ad / bc.
     - RR = a(b + d) / b(a + c).
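A minimal Python sketch of these two formulae from the a/b/c/d layout above; the cell counts are reused from the worked example on slide 10 purely for illustration.

```python
# 2 x 2 table cells as laid out on the slide:
#   a = disease & test positive,   c = no disease & test positive
#   b = disease & test negative,   d = no disease & test negative
a, b, c, d = 90, 10, 1980, 7920                  # illustrative counts (slide 10)

odds_ratio = (a * d) / (b * c)                   # ad / bc
relative_risk = (a * (b + d)) / (b * (a + c))    # a(b+d) / b(a+c)
print(odds_ratio, relative_risk)
```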
  28. NNT
     - The NNT (number needed to treat) is the reciprocal of the absolute risk reduction: NNT = 1/AR.
     - In this example, NNT = 1000/2 = 500.
     - You would need to treat 500 people to prevent one extra death.
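And the NNT from the same example, as a one-line check:

```python
arr = 2 / 1000    # absolute risk reduction from slide 26 (2 per 1000)
nnt = 1 / arr     # reciprocal of the absolute risk reduction
print(nnt)        # 500 -> treat 500 people to prevent one extra death
```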
  29. Selection bias
     - Occurs when the study subjects are not representative of the target population about which the conclusions are drawn.
  30. Confounding
     - Occurs when the effect of an exposure on the disease is distorted because the exposure is associated with other factors that influence the outcome under study.
