
- 1. Statistics
- 2. Reference interval <ul><li>A 95% reference interval is an interval which contains the central 95% of the population. </li></ul><ul><li>For normally distributed data, this is the mean +/- 2SD. </li></ul><ul><li>May be calculated as two standard deviations on either side of the mean. </li></ul><ul><li>May be calculated directly from the frequency distribution. </li></ul>
- 3. Summarizing data “Average” <ul><li>Mean (average) </li></ul><ul><li>Median (middle observation) </li></ul><ul><li>Mode (most frequent value) </li></ul>
- 4. Sensitivity <ul><li>Sensitivity = proportion of people with disease who test positive. </li></ul><ul><li>Probability of a positive test result among those with the disease. </li></ul><ul><li>P(T+|D+) </li></ul><ul><li>SNout - A highly Sensitive test, if Negative, will Rule Out a disease. </li></ul><ul><li>Therefore useful for screening. </li></ul>
- 5. Specificity <ul><li>Specificity = proportion of people without the disease who test negative. </li></ul><ul><li>Probability of a negative test result among those without the disease. </li></ul><ul><li>P(T-|D-) </li></ul><ul><li>SPIn - A highly Specific test, if Positive, will Rule In a disease. </li></ul><ul><li>Therefore useful as a diagnostic test. </li></ul>
- 6. Prevalence <ul><li>Prevalence refers to the number of individuals with a given disease at a given point in time divided by the population at risk at that point in time. </li></ul><ul><li>Sensitivity and specificity do NOT change with prevalence. </li></ul><ul><li>Higher prevalence gives a higher PPV. </li></ul><ul><li>Lower prevalence gives a higher NPV. </li></ul>
- 7. PPV and NPV <ul><li>PPV= Probability of having the target condition given a positive test result. </li></ul><ul><li>NPV= Probability of not having the target condition given a negative test result. </li></ul>
- 8. Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP); PPV = TP/(TP+FP); NPV = TN/(TN+FN)

                      Disease    No Disease
      Test Positive     TP           FP
      Test Negative     FN           TN
- 9. Example <ul><li>A test has 90% sensitivity, 80% specificity and 1% prevalence. </li></ul><ul><li>Calculate the PPV and NPV. </li></ul><ul><li>Steps: </li></ul><ul><li>Assume a population of 10,000. </li></ul><ul><li>Create table. </li></ul>
- 10. <ul><li>Prevalence of 1% = 0.01 x 10,000 = 100 with the disease. </li></ul><ul><li>A sensitivity of 90% means 90 of the 100 with the disease test positive. </li></ul><ul><li>A specificity of 80% means 80% of the 9,900 without the disease test negative = 7,920. </li></ul>

                      Disease    No Disease
      Test Positive     90          1980       2070
      Test Negative     10          7920       7930
                       100          9900      10,000
- 11. Answer <ul><li>PPV = 90/2070 ≈ 0.043 = 4.3%. </li></ul><ul><li>NPV = 7920/7930 ≈ 0.999 = 99.9% </li></ul>
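The worked example on slides 9-11 can be checked with a short script. This is a minimal sketch; the function name `ppv_npv` and the assumed population of 10,000 come from the slide's own setup:

```python
# Reproduce the worked example: 90% sensitivity, 80% specificity,
# 1% prevalence, assumed population of 10,000 (as on slide 9).
def ppv_npv(sensitivity, specificity, prevalence, population=10_000):
    diseased = prevalence * population           # 100 with disease
    healthy = population - diseased              # 9,900 without disease
    tp = sensitivity * diseased                  # 90 true positives
    fn = diseased - tp                           # 10 false negatives
    tn = specificity * healthy                   # 7,920 true negatives
    fp = healthy - tn                            # 1,980 false positives
    ppv = tp / (tp + fp)                         # P(disease | positive test)
    npv = tn / (tn + fn)                         # P(no disease | negative test)
    return ppv, npv

ppv, npv = ppv_npv(0.90, 0.80, 0.01)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")       # PPV ≈ 4.3%, NPV ≈ 99.9%
```

Building the full 2x2 table first, as the slide does, makes the low PPV at 1% prevalence easy to see: the 1,980 false positives swamp the 90 true positives.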
- 12. Likelihood ratio <ul><li>The positive likelihood ratio measures how much the odds of disease increase after a positive test result. </li></ul><ul><li>PLR = Sensitivity/(1-Specificity). </li></ul><ul><li>The negative likelihood ratio measures how much the odds of disease decrease after a negative test result. </li></ul><ul><li>NLR = (1-Sensitivity)/Specificity. </li></ul>
- 13. <ul><li>Pre-test odds = pre-test probability/ (1-pre-test probability) </li></ul><ul><li>= Prevalence/(1-Prevalence) </li></ul><ul><li>Post-test odds = pre-test odds x LR </li></ul><ul><li>Post-test probability = post-test odds/(post-test odds +1) </li></ul>
- 14. Example <ul><li>A new D-dimer assay has a sensitivity for DVT of 95% and a specificity of 50%. </li></ul><ul><li>It is proposed to use it to screen a group of passengers who have a pre-test DVT probability of 1%. </li></ul><ul><li>What is the post-test probability of DVT in an individual with a positive D-dimer result? </li></ul>
- 15. Answer <ul><li>Positive likelihood ratio = sensitivity/(1-specificity) </li></ul><ul><li>= 0.95/(1-0.50) = 1.9. </li></ul><ul><li>Pre-test odds = 0.01/(1-0.01) = 0.0101. </li></ul><ul><li>Post-test odds = 0.0101 x 1.9 = 0.0192. </li></ul><ul><li>Post-test probability = 0.0192/1.0192 = 0.0188 or 1.88%. </li></ul>
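The odds-to-probability chain on slides 13-15 can be sketched directly; the variable names below are illustrative, and the numbers are the D-dimer example's:

```python
# Post-test probability via likelihood ratios (D-dimer example:
# sensitivity 95%, specificity 50%, pre-test probability 1%).
sensitivity, specificity, pretest_prob = 0.95, 0.50, 0.01

plr = sensitivity / (1 - specificity)                 # PLR = 1.9
pretest_odds = pretest_prob / (1 - pretest_prob)      # ≈ 0.0101
posttest_odds = pretest_odds * plr                    # ≈ 0.0192
posttest_prob = posttest_odds / (1 + posttest_odds)   # back to a probability

print(f"Post-test probability = {posttest_prob:.2%}")  # ≈ 1.88%
```

Note that even a positive result barely moves a 1% pre-test probability when the PLR is only 1.9, which is why a weakly specific test makes a poor rule-in test.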
- 16. SD and SE <ul><li>Standard Deviation measures the variability of data around the mean. It provides information on how much variability can be expected among individuals within a population. </li></ul><ul><li>Standard Error describes how much variability can be expected when measuring the mean from several different samples. </li></ul>
- 17. Standard Error <ul><li>Is a measure of how far the sample mean is likely to be from the population mean. </li></ul><ul><li>Decreases with increasing sample size. </li></ul>
- 18. <ul><li>SE = SD / √(sample size) </li></ul>
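A minimal sketch of the formula above (the helper name `standard_error` is an assumption, not from the slides):

```python
import math

# Standard error of the mean: SE = SD / sqrt(n).
def standard_error(sd, n):
    return sd / math.sqrt(n)

# SE shrinks as sample size grows, as slide 19 notes.
print(standard_error(10, 25))    # 2.0
print(standard_error(10, 100))   # 1.0
```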
- 19. If sample size increases… <ul><li>SE decreases </li></ul><ul><li>Confidence interval width decreases </li></ul><ul><li>Precision of the estimate increases </li></ul><ul><li>“The larger the sample, the more confident you are that the mean obtained is close to the true mean.” </li></ul>
- 20. P-Value <ul><li>Probability Value </li></ul><ul><li>The probability of observing a result of at least this magnitude due to chance, assuming that the null hypothesis is true. </li></ul><ul><li>A p < 0.05 means that the likelihood of a difference this large being due to chance is less than 1 in 20. </li></ul><ul><li>By convention, p < 0.05 is accepted as statistically significant. </li></ul><ul><li>However, even very small p-values do not indicate that the difference is clinically important (i.e. they do not indicate the magnitude). </li></ul><ul><li>Reject the null hypothesis if the p-value < 0.05. </li></ul><ul><li>“If the p-value is low, the null hypothesis must go.” </li></ul>
- 21. <ul><li>Null hypothesis, H0 = no true difference. </li></ul><ul><li>Alternative hypothesis, HA = a true difference exists. </li></ul>
- 22. Errors <ul><li>A type I error (also known as alpha) is the probability of incorrectly concluding that there is a statistically significant difference in a dataset when none exists. Alpha is the significance threshold against which the p-value is compared. </li></ul><ul><li>Thus, a statistically significant difference reported as p < 0.05 means that there is less than a 5 percent chance that the difference could have occurred by chance. </li></ul><ul><li>A type II error (also known as beta) is the probability of incorrectly concluding that there was no statistically significant difference in a dataset when one exists. This error often reflects insufficient power of the study. </li></ul>
- 23. Power <ul><li>The term "power" (calculated as 1 - beta) refers to the ability of a study to detect a true difference. </li></ul><ul><li>Negative findings in a study may reflect that the study was underpowered to detect a difference. </li></ul><ul><li>Power increases with sample size. </li></ul>
- 24. Relative Risk and Absolute Risk <ul><li>RR = The relative risk (or risk ratio) equals the incidence in exposed (T) individuals divided by the incidence in unexposed (C) individuals. </li></ul><ul><li>RR = I_T / I_C </li></ul><ul><li>AR = Absolute Risk = attributable risk = risk difference. </li></ul><ul><li>It reflects the additional incidence of disease related to an exposure, taking into account the background rate of the disease. </li></ul><ul><li>AR = I_T - I_C </li></ul>
- 25. Odds ratio <ul><li>The odds ratio equals the odds that an individual with a specific condition has been exposed to a risk factor divided by the odds that a control has been exposed. </li></ul><ul><li>The odds ratio provides a reasonable estimate of the relative risk for uncommon conditions. </li></ul><ul><li>The relative risk and odds ratio are interpreted relative to the number one. </li></ul><ul><li>An odds ratio of 0.6 suggests that patients exposed to a variable of interest were 40 percent less likely to develop a specific outcome compared to the control group. </li></ul><ul><li>Similarly, an odds ratio of 1.5 suggests that the risk was increased by 50 percent. </li></ul>
- 26. Example
  RR = I_T / I_C = 0.75 = 25% reduction in death.
  AR = I_T - I_C = 2 per 1000 (2/1000).
  Need to treat 1000 people to prevent 2 more deaths than control treatment.

                   Number    Deaths    Cumulative incidence
      Treatment     1000       6       I_T = 6/1000
      Control       1000       8       I_C = 8/1000
- 27. Example
  Odds ratio = ad/bc
  RR = a(b+d) / [b(a+c)]

                      Disease    No Disease
      Positive test      a           c          a+c
      Negative test      b           d          b+d
                        a+b         c+d          N
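The two formulas on slide 27 can be sketched as functions of the four cell counts. The function names are illustrative; the counts plugged in below are borrowed from the slide-10 table (90, 10, 1980, 7920):

```python
# Odds ratio and relative risk from a 2x2 table laid out as on slide 27:
#   a = positive test, disease      c = positive test, no disease
#   b = negative test, disease      d = negative test, no disease
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    # Risk of disease in the test-positive row over the test-negative row.
    return (a * (b + d)) / (b * (a + c))

a, b, c, d = 90, 10, 1980, 7920          # counts from the slide-10 table
print(odds_ratio(a, b, c, d))            # 36.0
print(round(relative_risk(a, b, c, d), 1))  # 34.5
```

With these counts the outcome is common in the positive-test row, so the odds ratio (36) noticeably overstates the relative risk (34.5); the two converge only when the condition is uncommon in both rows, as slide 25 notes.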
- 28. NNT <ul><li>NNT is the reciprocal of the absolute risk reduction. </li></ul><ul><li>NNT = 1/AR </li></ul><ul><li>In this case, 1/AR= 1000/2 = 500. </li></ul><ul><li>Need to treat 500 people to prevent one extra death. </li></ul>
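The RR, AR and NNT calculations from slides 26 and 28 chain together in a few lines; the variable names below are assumptions, the numbers are the slides':

```python
# Slide-26 trial: 6/1000 deaths on treatment vs 8/1000 on control.
incidence_treatment = 6 / 1000
incidence_control = 8 / 1000

rr = incidence_treatment / incidence_control     # relative risk, 0.75
arr = incidence_control - incidence_treatment    # absolute risk reduction, 2 per 1000
nnt = 1 / arr                                    # number needed to treat, 500

print(round(rr, 2), round(arr, 3), round(nnt))
```

Note the reciprocal relationship: halving the absolute risk reduction doubles the NNT, which is why a large relative-risk reduction can still mean treating many patients when the baseline risk is low.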
- 29. Selection bias <ul><li>Occurs when the study subjects are not representative of the target population about which the conclusions are drawn. </li></ul>
- 30. Confounding <ul><li>Results when the effect of an exposure on the disease is distorted because of the association of exposure with other factors that influence the outcome under study. </li></ul>
