    1. Practical Applications of Statistical Methods in the Clinical Laboratory. Roger L. Bertholf, Ph.D., DABCC, Associate Professor of Pathology, Director of Clinical Chemistry & Toxicology, UF Health Science Center/Jacksonville
    2. “[Statistics are] the only tools by which an opening can be cut through the formidable thicket of difficulties that bars the path of those who pursue the Science of Man.” [Sir] Francis Galton (1822-1911)
    3. “There are three kinds of lies: lies, damned lies, and statistics.” Benjamin Disraeli (1804-1881)
    4. What are statistics, and what are they used for?
       • Descriptive statistics are used to characterize data.
       • Statistical analysis is used to distinguish between random and meaningful variations.
       • In the laboratory, we use statistics to monitor and verify method performance and to interpret the results of clinical laboratory tests.
    5. “Do not worry about your difficulties in mathematics; I assure you that mine are greater.” Albert Einstein (1879-1955)
    6. “I don't believe in mathematics.” Albert Einstein
    7. Summation function: Σ xᵢ = x₁ + x₂ + ... + xₙ
    8. Product function: Π xᵢ = x₁ · x₂ · ... · xₙ
    9. The Mean (average)
       • The mean is a measure of the centrality of a set of data.
    10. Mean (arithmetical): x̄ = (1/n) Σ xᵢ
    11. Mean (geometric): x̄_G = (Π xᵢ)^(1/n)
    12. Use of the geometric mean:
       • The geometric mean is primarily used to average ratios or rates of change.
    13. Mean (harmonic): x̄_H = n / Σ(1/xᵢ)
    14. Example of the use of the harmonic mean:
       • Suppose you spend $6 on pills costing 30 cents per dozen, and $6 on pills costing 20 cents per dozen. What was the average price of the pills you bought?
    15. Example of the use of the harmonic mean:
       • You spent $12 on 50 dozen pills, so the average cost is 12/50 = 0.24, or 24 cents per dozen.
       • This also happens to be the harmonic mean of 20 and 30: 2/(1/20 + 1/30) = 24.
    16. Root mean square (RMS): x_RMS = √((1/n) Σ xᵢ²)
    17. For the data set 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, the four means differ: harmonic ≈ 3.41, geometric ≈ 4.53, arithmetic = 5.5, RMS ≈ 6.20. (See the sketch below.)
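As a quick check of slide 17, this plain-Python sketch computes all four means for the data set 1-10; the data and expected values come from the deck, while the code itself is only illustrative:

```python
import math

data = list(range(1, 11))
n = len(data)

arithmetic = sum(data) / n                     # 5.5
geometric = math.prod(data) ** (1 / n)         # ~4.53
harmonic = n / sum(1 / x for x in data)        # ~3.41
rms = math.sqrt(sum(x * x for x in data) / n)  # ~6.20

print(arithmetic, geometric, harmonic, rms)
```

Note the general ordering: harmonic ≤ geometric ≤ arithmetic ≤ RMS.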
    18. The Weighted Mean: x̄_w = Σ(wᵢxᵢ) / Σ wᵢ
    19. Other measures of centrality
       • Mode
    20. The Mode
       • The mode is the value that occurs most often.
    21. Other measures of centrality
       • Mode
       • Midrange
    22. The Midrange
       • The midrange is the mean of the highest and lowest values.
    23. Other measures of centrality
       • Mode
       • Midrange
       • Median
    24. The Median
       • The median is the value for which half of the remaining values are above it and half are below it. E.g., in an ordered array of 15 values, the 8th value is the median. If the array has 16 values, the median is the mean of the 8th and 9th values.
    25. Example of the use of median vs. mean:
       • Suppose you're thinking about building a house in a certain neighborhood, and the real estate agent tells you that the average (mean) size of houses in that area is 2,500 sq. ft. Astutely, you ask, “What's the median size?” The agent replies, “1,800 sq. ft.”
       • What does this tell you about the sizes of the houses in the neighborhood?
    26. Measuring variance
       • Two sets of data may have similar means but otherwise be very dissimilar. For example, males and females have similar baseline LH concentrations, but there is much wider variation in females.
       • How do we express quantitatively the amount of variation in a data set?
    27. The Variance
    28. The Variance
       • The variance is the mean of the squared differences between individual data points and the mean of the array: s² = Σ(xᵢ - x̄)²/n.
       • Or, after simplifying, the mean of the squares minus the squared mean: s² = (Σxᵢ²/n) - x̄².
    29. The Variance
    30. The Variance
       • In what units is the variance?
       • Is that a problem?
    31. The Standard Deviation
    32. The Standard Deviation
       • The standard deviation is the square root of the variance. Note that the standard deviation is not the mean difference between individual data points and the mean of the array.
    33. The Standard Deviation
       • In what units is the standard deviation?
       • Is that a problem?
    34. The Coefficient of Variation*: CV = 100% × s/x̄
       • *Sometimes called the Relative Standard Deviation (RSD or %RSD).
    35. Standard Deviation (or Error) of the Mean
       • The standard deviation of an average decreases by the reciprocal of the square root of the number of data points used to calculate the average: SEM = σ/√N.
    36. Exercises
       • How many measurements must we average to improve our precision by a factor of 2?
    37. Answer
       • To improve precision by a factor of 2, average 4 measurements (√4 = 2).
    38. Exercises
       • How many measurements must we average to improve our precision by a factor of 2?
       • How many to improve our precision by a factor of 10?
    39. Answer
       • To improve precision by a factor of 10, average 100 measurements (√100 = 10).
    40. Exercises
       • How many measurements must we average to improve our precision by a factor of 2?
       • How many to improve our precision by a factor of 10?
       • If an assay has a CV of 7%, and we decide to run samples in duplicate and average the measurements, what should the resulting CV be?
    41. Answer
       • Improvement in CV by running duplicates: 7%/√2 ≈ 4.9%. (A sketch follows.)
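The three precision exercises above reduce to a single relationship, CV of the mean = CV/√N. A small plain-Python sketch (the function name is ours, not the deck's):

```python
import math

def cv_of_mean(cv_single: float, n: int) -> float:
    """CV of an average of n independent measurements."""
    return cv_single / math.sqrt(n)

print(cv_of_mean(7.0, 2))    # duplicates of a 7% CV assay -> ~4.9%
print(cv_of_mean(7.0, 4))    # averaging 4 halves the CV   -> 3.5%
print(cv_of_mean(7.0, 100))  # averaging 100 improves 10x  -> 0.7%
```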
    42. Population vs. sample standard deviation
       • When we speak of a population, we're referring to the entire data set, which will have a mean µ.
    43. Population vs. sample standard deviation
       • When we speak of a population, we're referring to the entire data set, which will have a mean µ.
       • When we speak of a sample, we're referring to a subset of the population, whose mean is customarily designated x̄ (“x-bar”).
       • Which is used to calculate the standard deviation?
    44. “Sir, I have found you an argument. I am not obliged to find you an understanding.” Samuel Johnson (1709-1784)
    45. Population vs. sample standard deviation: the population standard deviation divides the sum of squared deviations by N; the sample standard deviation divides by N - 1.
    46. Distributions
       • Definition
    47. Statistical (probability) distribution
       • A statistical distribution is a mathematically derived probability function that can be used to predict the characteristics of certain applicable real populations.
       • Statistical methods based on probability distributions are parametric, since certain assumptions are made about the data.
    48. Distributions
       • Definition
       • Examples
    49. Binomial distribution
       • The binomial distribution applies to events that have two possible outcomes. The probability of r successes in n attempts, when the probability of success in any individual attempt is p, is given by: P(r) = [n!/(r!(n - r)!)] pʳ(1 - p)ⁿ⁻ʳ
    50. Example
       • What is the probability that 10 of the 12 babies born one busy evening in your hospital will be girls?
    51. Solution
       • P(10) = [12!/(10! 2!)] (0.5)¹⁰(0.5)² = 66/4096 ≈ 0.016, or about 1 chance in 60. (See the sketch below.)
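A short sketch of the binomial calculation on slide 51, using Python's math.comb; the helper name binom_pmf is our own:

```python
from math import comb

def binom_pmf(r: int, n: int, p: float) -> float:
    """Probability of exactly r successes in n trials."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

print(binom_pmf(10, 12, 0.5))  # ~0.016, i.e. about 1 chance in 60
```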
    52. Distributions
       • Definition
       • Examples
         ◦ Binomial
    53. “God does arithmetic.” Karl Friedrich Gauss (1777-1855)
    54. The Gaussian Distribution
       • What is the Gaussian distribution?
    55. [Slide: a list of random numbers: 63, 81, 36, 12, 28, 7, 79, 52, 96, 17, 22, 4, 61, 85, etc.]
    56. [Slide: a second list of random numbers is added to the first, pair by pair: 63 + 22 = 85, 81 + 73 = 154, 36 + 54 = 90, and so on.]
    57. [Slide: the process repeated . . . etc.]
    58. [Figure: the distribution of the sums begins to form a bell-shaped probability curve over x.]
    59. The Gaussian probability function
       • The probability of x in a Gaussian distribution with mean µ and standard deviation σ is given by: P(x) = [1/(σ√(2π))] exp[-(x - µ)²/(2σ²)]
    60. The Gaussian Distribution
       • What is the Gaussian distribution?
       • What types of data fit a Gaussian distribution?
    61. “Like the ski resort full of girls hunting for husbands and husbands hunting for girls, the situation is not as symmetrical as it might seem.” Alan Lindsay Mackay (1926- )
    62. Are these Gaussian?
       • Human height
       • Outside temperature
       • Raindrop size
       • Blood glucose concentration
       • Serum CK activity
       • QC results
       • Proficiency results
    63. The Gaussian Distribution
       • What is the Gaussian distribution?
       • What types of data fit a Gaussian distribution?
       • What is the advantage of using a Gaussian distribution?
    64. [Figure: Gaussian probability distribution with markers at µ ± 1σ, 2σ, and 3σ; about 0.67 of the area lies within ±1σ and 0.95 within ±2σ.]
    65. What are the odds of an observation . . .
       • more than 1σ from the mean (±)?
       • more than 2σ greater than the mean?
       • more than 3σ from the mean?
    66. Some useful Gaussian probabilities (a sketch reproducing these values follows):

        Range     Probability   Odds of falling outside
        ± 1.00σ   68.3%         1 in 3
        ± 1.64σ   90.0%         1 in 10
        ± 1.96σ   95.0%         1 in 20
        ± 2.58σ   99.0%         1 in 100
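The table above can be reproduced from the Gaussian CDF; this sketch uses math.erf, and the prob_within helper is our own naming:

```python
import math

def prob_within(z: float) -> float:
    """P(|x - mu| <= z*sigma) for a Gaussian distribution."""
    return math.erf(z / math.sqrt(2))

for z in (1.00, 1.64, 1.96, 2.58):
    p = prob_within(z)
    print(f"+/- {z:.2f} sigma: {p:.1%}, odds outside ~1 in {1/(1 - p):.0f}")
```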
    67. Example [Figure: two Gaussian distributions, labeled “This” and “That”, compared.]
    68. [On the Gaussian curve] “Experimentalists think that it is a mathematical theorem while the mathematicians believe it to be an experimental fact.” Gabriel Lippman (1845-1921)
    69. Distributions
       • Definition
       • Examples
         ◦ Binomial
         ◦ Gaussian
    70. "Life is good for only two things, discovering mathematics and teaching mathematics." Siméon Poisson (1781-1840)
    71. The Poisson Distribution
       • The Poisson distribution predicts the frequency of r events occurring randomly in time, when the expected frequency is µ: P(r) = µʳe⁻µ/r!
    72. Examples of events described by a Poisson distribution
       • Lightning
       • Accidents
       • Laboratory? (Radioactive decay counts, for example.)
    73. A very useful property of the Poisson distribution: the variance equals the mean, so the standard deviation of a count N is √N.
    74. Using the Poisson distribution
       • How many counts must be collected in an RIA in order to ensure an analytical CV of 5% or less?
    75. Answer
       • The CV of a count N is √N/N = 1/√N. For CV ≤ 5%, we need 1/√N ≤ 0.05, or N ≥ 400 counts. (See the sketch below.)
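Slide 75's counting rule in code form, as a plain-Python sketch (the helper name is invented):

```python
import math

def counts_needed(target_cv: float) -> int:
    """Minimum Poisson counts so that the counting CV <= target_cv."""
    return math.ceil(1 / target_cv**2)

print(counts_needed(0.05))  # 400 counts for a 5% counting CV
print(counts_needed(0.01))  # 10000 counts for 1%
```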
    76. Distributions
       • Definition
       • Examples
         ◦ Binomial
         ◦ Gaussian
         ◦ Poisson
    77. The Student's t Distribution
       • When a small sample is selected from a large population, we sometimes have to make certain assumptions in order to apply statistical methods.
    78. Questions about our sample
       • Is the mean of our sample, x̄, the same as the mean of the population, µ?
       • Is the standard deviation of our sample, s, the same as the standard deviation of the population, σ?
       • Unless we can answer both of these questions affirmatively, we don't know whether our sample has the same distribution as the population from which it was drawn.
    79. Recall that the Gaussian distribution is defined by the probability function:
       • P(x) = [1/(σ√(2π))] exp[-(x - µ)²/(2σ²)]
       • Note that the exponential factor contains both µ and σ, both population parameters. The factor is often simplified by making the substitution z = (x - µ)/σ.
    80. The variable z in the equation z = (x - µ)/σ is distributed according to a unit Gaussian, since it has a mean of zero and a standard deviation of 1.
    81. [Figure: unit Gaussian probability distribution over z from -3 to 3; about 0.67 of the area lies within ±1 and 0.95 within ±2.]
    82. But if we use the sample mean and standard deviation instead, we get t = (x - x̄)/s, and we've defined a new quantity, t, which is not distributed according to the unit Gaussian. It is distributed according to the Student's t distribution.
    83. Important features of the Student's t distribution
       • Use of the t statistic assumes that the parent distribution is Gaussian.
       • The degree to which the t distribution approximates a Gaussian distribution depends on the degrees of freedom (N - 1).
       • As N gets larger (above 30 or so), the differences between t and z become negligible.
    84. Application of the Student's t distribution to a sample mean
       • The Student's t statistic can also be used to analyze differences between the sample mean and the population mean: t = (x̄ - µ)/(s/√N)
    85. Comparison of Student's t and Gaussian distributions
       • Note that, for a sufficiently large N (>30), t can be replaced with z, and a Gaussian distribution can be assumed.
    86. Exercise
       • The mean age of the 20 participants in one workshop is 27 years, with a standard deviation of 4 years. Next door, another workshop has 16 participants with a mean age of 29 years and a standard deviation of 6 years.
       • Is the second workshop attracting older technologists?
    87. Preliminary analysis
       • Is the population Gaussian?
       • Can we use a Gaussian distribution for our sample?
       • What statistic should we calculate?
    88. Solution
       • First, calculate the t statistic for the two means: t = (x̄₂ - x̄₁)/√(s₁²/N₁ + s₂²/N₂)
    89. Solution, cont.
       • Next, determine the degrees of freedom: df = N₁ + N₂ - 2 = 34.
    90. Statistical Tables [table of critical t values not reproduced]
    91. Conclusion
       • Since the calculated t (1.16) is less than 1.64 (the t value corresponding to the 90% confidence limit), the difference between the mean ages of the participants in the two workshops is not significant. (A sketch of the calculation follows.)
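A sketch of the workshop comparison using the unpooled two-sample t from slide 88. Depending on whether a pooled or unpooled variance is used, the statistic comes out near the slide's 1.16; either way it is below the 1.64 cutoff:

```python
import math

def t_two_means(m1, s1, n1, m2, s2, n2):
    """Unpooled two-sample t statistic from summary statistics."""
    return (m2 - m1) / math.sqrt(s1**2 / n1 + s2**2 / n2)

t = t_two_means(27, 4, 20, 29, 6, 16)
print(round(t, 2))  # ~1.15, below 1.64 -> not significant
```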
    92. The Paired t Test
       • Suppose we are comparing two sets of data in which each value in one set has a corresponding value in the other. Instead of calculating the difference between the means of the two sets, we can calculate the mean difference between data pairs.
    93. Instead of the difference between the two means, x̄₁ - x̄₂, we use the mean of the paired differences, d̄ = Σ(yᵢ - xᵢ)/N, to calculate t: t = d̄/(s_d/√N)
    94. Advantage of the paired t
       • If the type of data permits paired analysis, the paired t test is much more sensitive than the unpaired t.
       • Why?
    95. Applications of the paired t (a sketch follows)
       • Method correlation
       • Comparison of therapies
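A sketch of the paired t calculation; the paired method-comparison values below are invented purely to make the code runnable:

```python
import math

def paired_t(xs, ys):
    """t statistic for the mean of the paired differences."""
    d = [y - x for x, y in zip(xs, ys)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

method_a = [4.1, 5.0, 6.2, 7.8, 9.1]  # hypothetical paired results
method_b = [4.4, 5.3, 6.1, 8.2, 9.5]
print(paired_t(method_a, method_b))   # ~2.8
```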
    96. Distributions
       • Definition
       • Examples
         ◦ Binomial
         ◦ Gaussian
         ◦ Poisson
         ◦ Student's t
    97. The χ² (chi-square) Distribution
       • There is a general formula that relates actual measurements to their predicted values: χ² = Σ (observed - expected)²/expected
    98. The χ² (chi-square) Distribution
       • A special (and very useful) application of the χ² distribution is to frequency data.
    99. Exercise
       • In your hospital, you have had 83 cases of iatrogenic strep infection in your last 725 patients. St. Elsewhere, across town, reports 35 cases of strep in their last 416 patients.
       • Do you need to review your infection control policies?
    100. Analysis
       • If your infection control policy is roughly as effective as St. Elsewhere's, we would expect the rates of strep infection at the two hospitals to be similar. The expected frequency, then, would be the pooled average rate: (83 + 35)/(725 + 416) ≈ 0.103.
    101. Calculating χ²
       • First, calculate the expected frequencies at your hospital (f₁) and St. Elsewhere (f₂): f₁ = 725 × 0.103 ≈ 75, f₂ = 416 × 0.103 ≈ 43.
    102. Calculating χ²
       • Next, we sum the squared differences between actual and expected frequencies, each divided by the expected frequency.
    103. Degrees of freedom
       • In general, when comparing k sample proportions, the degrees of freedom for χ² analysis are k - 1. Hence, for our problem, there is 1 degree of freedom.
    104. Conclusion
       • A table of χ² values lists 3.841 as the χ² corresponding to a probability of 0.05 (1 df).
       • Our calculated χ² is below that, so the variation in strep infection rates between the two hospitals is within statistically predicted limits, and therefore is not significant. (See the sketch below.)
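The strep comparison as a 2×2 chi-square in plain Python. The counts are from slide 99, and the result lands below 3.841, as slide 104 concludes (small differences in how the cells are grouped move the value slightly):

```python
cases = [83, 35]       # infections at each hospital
totals = [725, 416]
rate = sum(cases) / sum(totals)  # pooled rate, ~0.103

chi2 = 0.0
for c, t in zip(cases, totals):
    exp_pos, exp_neg = t * rate, t * (1 - rate)
    chi2 += (c - exp_pos) ** 2 / exp_pos          # infected cell
    chi2 += ((t - c) - exp_neg) ** 2 / exp_neg    # uninfected cell

print(round(chi2, 2))  # ~2.6, below 3.841 (p = 0.05, 1 df) -> not significant
```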
    105. Distributions
       • Definition
       • Examples
         ◦ Binomial
         ◦ Gaussian
         ◦ Poisson
         ◦ Student's t
         ◦ χ²
    106. The F distribution
       • The F distribution predicts the expected differences between the variances of two samples.
       • This distribution has also been called Snedecor's F distribution, the Fisher distribution, and the variance ratio distribution.
    107. The F distribution
       • The F statistic is simply the ratio of two variances: F = V₁/V₂ (by convention, the larger V is the numerator).
    108. Applications of the F distribution
       • There are several ways the F distribution can be used. Applications of the F statistic are part of a more general type of statistical analysis called analysis of variance (ANOVA). We'll see more about ANOVA later.
    109. Example
       • You're asked to do a “quick and dirty” correlation between three whole blood glucose analyzers. You prick your finger and measure your blood glucose four times on each of the analyzers.
       • Are the results equivalent?
    110. Data [table of glucose results not reproduced]
    111. Analysis
       • The mean glucose concentrations for the three analyzers are 70, 85, and 76.
       • If the three analyzers are equivalent, then we can assume that all of the results are drawn from an overall population with mean µ and variance σ².
    112. Analysis, cont.
       • Approximate µ by calculating the mean of the means: (70 + 85 + 76)/3 = 77.
    113. Analysis, cont.
       • Calculate the variance of the means: [(70 - 77)² + (85 - 77)² + (76 - 77)²]/(3 - 1) = 57.
    114. Analysis, cont.
       • But what we really want is the variance of the population. Recall that the standard error of the mean is σ/√N, so the variance of the means is σ²/N.
    115. Analysis, cont.
       • Since we just calculated the variance of the means, we can solve for σ²: σ² = N × 57 = 4 × 57 = 228.
    116. Analysis, cont.
       • So we now have an estimate of the population variance, which we'd like to compare to the real variance to see whether they differ. But what is the real variance?
       • We don't know, but we can calculate the variance based on our individual measurements.
    117. Analysis, cont.
       • If all the data were drawn from a larger population, we can assume that the variances are the same, and we can simply average the variances of the three data sets.
    118. Analysis, cont.
       • Now calculate the F statistic: F = (between-analyzer variance estimate)/(average within-analyzer variance) = 228/(average within-analyzer variance).
    119. Conclusion
       • A table of F values indicates that 4.26 is the limit for the F statistic at a 95% confidence level (for the appropriate degrees of freedom). Our value of 10.6 exceeds that, so we conclude that there is significant variation between the analyzers. (A sketch of the calculation follows.)
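A sketch of the F calculation on slides 111-119. The deck's raw glucose readings are not reproduced above, so the replicates below are invented (their means match the slide's 70, 85, and 76); with the original data the slide reports F = 10.6:

```python
groups = [
    [68, 70, 71, 71],  # analyzer 1, mean 70 (hypothetical replicates)
    [83, 85, 86, 86],  # analyzer 2, mean 85
    [74, 76, 77, 77],  # analyzer 3, mean 76
]

def var(xs):
    """Sample variance (divide by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n = len(groups[0])                                # replicates per analyzer
means = [sum(g) / len(g) for g in groups]
between = n * var(means)                          # N x variance of the means
within = sum(var(g) for g in groups) / len(groups)
print(between / within)                           # F; compare to the table value
```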
    120. Distributions
       • Definition
       • Examples
         ◦ Binomial
         ◦ Gaussian
         ◦ Poisson
         ◦ Student's t
         ◦ χ²
         ◦ F
    121. Unknown or irregular distribution
       • Transform
    122. [Figure: log transform; a skewed probability distribution over x becomes symmetrical when plotted against log x.]
    123. Unknown or irregular distribution
       • Transform
       • Non-parametric methods
    124. Non-parametric methods
       • Non-parametric methods make no assumptions about the distribution of the data.
       • There are non-parametric methods for characterizing data, as well as for comparing data sets.
       • These methods are also called distribution-free, robust, or sometimes non-metric tests.
    125. Application to reference ranges
       • The concentrations of most clinical analytes are not usually distributed in a Gaussian manner. Why?
       • How do we determine the reference range (limits of expected values) for these analytes?
    126. Application to reference ranges
       • Reference ranges for normal, healthy populations are customarily defined as the “central 95%”.
       • An entirely non-parametric way of expressing this is to eliminate the upper and lower 2.5% of data, and use the remaining upper and lower values to define the range.
       • NCCLS recommends 120 values, dropping the two highest and the two lowest.
    127. Application to reference ranges
       • What happens when we want to compare one reference range with another? This is precisely what CLIA '88 requires us to do.
       • How do we do this?
    128. “Everything should be made as simple as possible, but not simpler.” Albert Einstein
    129. Solution #1: Simple comparison
       • Suppose we just do a small internal reference range study and compare our results to the manufacturer's range.
       • How do we compare them?
       • Is this a valid approach?
    130. NCCLS recommendations
       • Inspection method: verify that the reference populations are equivalent.
       • Limited validation: collect 20 reference specimens.
         ◦ No more than 2 may exceed the range.
         ◦ Repeat if failed.
       • Extended validation: collect 60 reference specimens; compare ranges.
    131. Solution #2: Mann-Whitney*
       • Rank the normal values (x₁, x₂, x₃ ... xₙ) and the reference population (y₁, y₂, y₃ ... yₙ) together: x₁, y₁, x₂, x₃, y₂, y₃ ... xₙ, yₙ
       • Count the number of y values that follow each x, and call the sum Uₓ. Calculate U_y also.
       • *Also called the U test, rank sum test, or Wilcoxon's test.
    132. Mann-Whitney, cont. (a sketch follows)
       • It should be obvious that Uₓ + U_y = NₓN_y.
       • If the two distributions are the same, then Uₓ = U_y = ½NₓN_y.
       • Large differences between Uₓ and U_y indicate that the distributions are not equivalent.
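A brute-force sketch of the Mann-Whitney counting described above; the two value lists are invented, and ties are ignored for simplicity:

```python
def mann_whitney_u(xs, ys):
    """Ux counts the y values ranking above each x; Ux + Uy = Nx * Ny."""
    ux = sum(1 for x in xs for y in ys if y > x)
    uy = len(xs) * len(ys) - ux
    return ux, uy

xs = [3.1, 4.2, 5.0, 6.3, 7.7]  # hypothetical in-house values
ys = [3.5, 4.8, 5.9, 6.1, 8.2]  # hypothetical reference values
print(mann_whitney_u(xs, ys))   # (14, 11): similar Ux and Uy
```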
    133. “‘Obvious’ is the most dangerous word in mathematics.” Eric Temple Bell (1883-1960)
    134. Solution #3: Run test
       • In the run test, order the values in the two distributions as before: x₁, y₁, x₂, x₃, y₂, y₃ ... xₙ, yₙ
       • Add up the number of runs (consecutive values from the same distribution). If the two data sets are randomly selected from one population, the values will interleave and produce many runs; a small number of runs indicates that the distributions differ.
    135. Solution #4: The Monte Carlo method
       • Sometimes, when we don't know anything about a distribution, the best thing to do is independently test its characteristics.
    136. [Figure: the Monte Carlo method; scatter of simulated x, y points.]
    137. [Figure: the Monte Carlo method; repeated samples of size N are drawn from the reference population, and the mean and SD of each sample are computed.]
    138. The Monte Carlo method
       • With the Monte Carlo method, we have simulated the test we wish to apply; that is, we have randomly selected samples from the parent distribution and determined whether our in-house data are in agreement with the randomly selected samples. (A sketch follows.)
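One way to read slides 136-138 as code: draw many random N-value samples from the reference population and ask how often their means are as extreme as ours. The population parameters, sample size, and in-house mean below are all invented for illustration:

```python
import random

random.seed(1)
ref_mean, ref_sd = 100.0, 10.0  # hypothetical reference population
n_draws, trials = 20, 10_000
inhouse_mean = 104.0            # hypothetical in-house study mean

sim_means = [
    sum(random.gauss(ref_mean, ref_sd) for _ in range(n_draws)) / n_draws
    for _ in range(trials)
]
frac_above = sum(m >= inhouse_mean for m in sim_means) / trials
print(frac_above)  # a small fraction suggests our mean is unusual here
```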
    139. Analysis of paired data
       • For certain types of laboratory studies, the data we gather are paired.
       • We typically want to know how closely the paired data agree.
       • We need quantitative measures of the extent to which the data agree or disagree.
       • Examples?
    140. Examples of paired data
       • Method correlation data
       • Pharmacodynamic effects
       • Risk analysis
       • Pathophysiology
    141. [Figure: correlation scatter plot; both axes run from 0 to 50.]
    142. Linear regression (least squares)
       • Linear regression analysis generates an equation for a straight line, y = mx + b, where m is the slope of the line and b is the value of y when x = 0 (the y-intercept).
       • The calculated equation minimizes the differences between actual y values and the linear regression line.
    143. [Figure: the same scatter plot with the fitted line y = 1.031x - 0.024.]
    144. Covariance
       • Do x and y values vary in concert, or randomly?
    145. Covariance, cont.
       • What if y increases when x increases?
       • What if y decreases when x increases?
       • What if y and x vary independently?
    146. Covariance
       • It is clear that the greater the covariance, the stronger the relationship between x and y.
       • But . . . what about units? E.g., if you measure glucose in mg/dL, and I measure it in mmol/L, who's likely to have the higher covariance?
    147. The Correlation Coefficient: ρ = cov(x, y)/(σₓσ_y)
    148. The Correlation Coefficient
       • The correlation coefficient is a unitless quantity that roughly indicates the degree to which x and y vary in the same direction.
       • ρ is useful for detecting relationships between parameters, but it is not a very sensitive measure of the spread.
    149. [Figure: scatter plot with y = 1.031x - 0.024, ρ = 0.9986.]
    150. [Figure: a noisier scatter plot with the same fit, y = 1.031x - 0.024, ρ = 0.9894.]
    151. Standard Error of the Estimate
       • The linear regression equation gives us a way to calculate an “estimated” y for any given x value, given the symbol ŷ (y-hat): ŷ = mx + b.
    152. Standard Error of the Estimate
       • Now what we are interested in is the average difference between the measured y and its estimate, ŷ: s_y/x = √[Σ(yᵢ - ŷᵢ)²/(N - 2)]
    153. [Figure: scatter plot with y = 1.031x - 0.024, ρ = 0.9986, s_y/x = 1.83.]
    154. [Figure: noisier scatter plot with y = 1.031x - 0.024, ρ = 0.9894, s_y/x = 5.32.]
    155. Standard Error of the Estimate
       • If we assume that the errors in the y measurements are Gaussian (is that a safe assumption?), then the standard error of the estimate gives us the boundaries within which 67% of the y values will fall.
       • ±2s_y/x defines the 95% boundaries. (A sketch follows.)
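A sketch of least-squares regression plus the standard error of the estimate. We use N - 2 degrees of freedom for s_y/x (a common convention; the deck's exact formula is not reproduced), and the data points are invented:

```python
import math

def linreg(xs, ys):
    """Least-squares slope, intercept, and standard error of the estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    m = sxy / sxx
    b = my - m * mx
    see = math.sqrt(sum((y - (m * x + b)) ** 2
                        for x, y in zip(xs, ys)) / (n - 2))
    return m, b, see

xs = [5, 10, 20, 30, 40, 50]          # hypothetical comparison data
ys = [5.2, 10.9, 20.4, 31.3, 40.9, 51.6]
print(linreg(xs, ys))                 # slope ~1.02, intercept near 0, s_y/x ~0.4
```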
    156. Limitations of linear regression
       • Assumes no error in the x measurements.
       • Assumes that the variance in y is constant throughout the concentration range.
    157. Alternative approaches
       • Weighted linear regression analysis can compensate for non-constant variance among the y measurements.
       • Deming regression analysis takes into account variance in the x measurements.
       • Weighted Deming regression analysis allows for both.
    158. Evaluating method performance
       • Precision
    159. Method precision
       • Within-run: 10 or 20 replicates.
         ◦ What types of errors does within-run precision reflect?
       • Day-to-day: NCCLS recommends evaluation over 20 days.
         ◦ What types of errors does day-to-day precision reflect?
    160. Evaluating method performance
       • Precision
       • Sensitivity
    161. Method sensitivity
       • The analytical sensitivity of a method refers to the lowest concentration of analyte that can be reliably detected.
       • The most common definition of sensitivity is the analyte concentration that will result in a signal two or three standard deviations above background.
    162. [Figure: signal vs. time, showing noise around a signal/noise threshold.]
    163. Other measures of sensitivity
       • Limit of Detection (LOD) is sometimes defined as the concentration producing an S/N > 3.
         ◦ In drug testing, LOD is customarily defined as the lowest concentration that meets all identification criteria.
       • Limit of Quantitation (LOQ) is sometimes defined as the concentration producing an S/N > 5.
         ◦ In drug testing, LOQ is customarily defined as the lowest concentration that can be measured within ±20%.
    164. Question
       • At an S/N ratio of 5, what is the minimum CV of the measurement?
       • If the S/N is 5, 20% of the measured signal is noise, which is random. Therefore, the CV must be at least 20%.
    165. Evaluating method performance
       • Precision
       • Sensitivity
       • Linearity
    166. Method linearity
       • A linear relationship between concentration and signal is not absolutely necessary, but it is highly desirable. Why?
       • CLIA '88 requires that the linearity of analytical methods be verified on a periodic basis.
    167. Ways to evaluate linearity
       • Visual/linear regression
    168. [Figure: signal vs. concentration for a linearity experiment.]
    169. Outliers
       • We can eliminate any point that differs from the next highest value by more than 0.765 (p = 0.05) times the spread between the highest and lowest values (Dixon test).
       • Example: 4, 5, 6, 13. The spread is 13 - 4 = 9, and 9 × 0.765 = 6.89. Since 13 - 6 = 7 exceeds 6.89, the value 13 may be rejected. (See the sketch below.)
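The slide's Dixon-style screen in code; 0.765 is the slide's critical factor for this design at p = 0.05:

```python
def dixon_high_outlier(values, critical=0.765):
    """Return the highest value if its gap from the next-highest value
    exceeds critical * (total spread), else None."""
    v = sorted(values)
    gap = v[-1] - v[-2]
    spread = v[-1] - v[0]
    return v[-1] if gap > critical * spread else None

print(dixon_high_outlier([4, 5, 6, 13]))  # 13 - 6 = 7 > 0.765 * 9 = 6.89 -> 13
```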
    170. Limitation of the linear regression method
       • If the analytical method has a high variance (CV), it is likely that small deviations from linearity will not be detected, due to the high standard error of the estimate.
    171. [Figure: signal vs. concentration, with scatter masking slight non-linearity.]
    172. Ways to evaluate linearity
       • Visual/linear regression
       • Quadratic regression
    173. Quadratic regression
       • Recall that, for linear data, the relationship between x and y can be expressed as y = f(x) = a + bx.
    174. Quadratic regression
       • A curve is described by the quadratic equation y = f(x) = a + bx + cx², which is identical to the linear equation except for the addition of the cx² term.
    175. Quadratic regression
       • It should be clear that the smaller the x² coefficient, c, the closer the data are to linear (since the equation reduces to the linear form as c approaches 0).
       • What is the drawback to this approach?
    176. Ways to evaluate linearity
       • Visual/linear regression
       • Quadratic regression
       • Lack-of-fit analysis
    177. Lack-of-fit analysis
       • There are two components of the variation from the regression line:
         ◦ Intrinsic variability of the method
         ◦ Variability due to deviations from linearity
       • The problem is to distinguish between these two sources of variability.
       • What statistical test do you think is appropriate?
    178. [Figure: signal vs. concentration with replicates at each level, illustrating pure error vs. lack of fit.]
    179. Lack-of-fit analysis
       • The ANOVA technique requires that the method variance be constant at all concentrations. Cochran's test is used to check whether this is the case.
    180. Lack-of-fit method calculations
       • Total sum of squares: the variance calculated from all of the y values.
       • Linear regression sum of squares: the variance of the y values from the regression line.
       • Residual sum of squares: the difference between TSS and LSS.
       • Lack-of-fit sum of squares: the RSS minus the pure error (sum of variances).
    181. Lack-of-fit analysis
       • The LOF is compared to the pure error to give the “G” statistic (which is actually F).
       • If the LOF is small compared to the pure error, G is small and the method is linear.
       • If the LOF is large compared to the pure error, G will be large, indicating significant deviation from linearity.
    182. Significance limits for G
       • 90% confidence = 2.49
       • 95% confidence = 3.29
       • 99% confidence = 5.42
    183. “If your experiment needs statistics, you ought to have done a better experiment.” Ernest Rutherford (1871-1937)
    184. Evaluating the clinical performance of laboratory tests
       • The clinical performance of a laboratory test defines how well it predicts disease.
       • The sensitivity of a test indicates the likelihood that it will be positive when disease is present.
    185. Clinical sensitivity
       • If TP is the number of “true positive” results and FN is the number of “false negative” results, the sensitivity is defined as: sensitivity = TP/(TP + FN) × 100%
    186. Example
       • Of 25 admitted cocaine abusers, 23 tested positive for urinary benzoylecgonine and 2 tested negative. What is the sensitivity of the urine screen? (23/25 = 92%.)
    187. Evaluating the clinical performance of laboratory tests
       • The clinical performance of a laboratory test defines how well it predicts disease.
       • The sensitivity of a test indicates the likelihood that it will be positive when disease is present.
       • The specificity of a test indicates the likelihood that it will be negative when disease is absent.
    188. Clinical specificity
       • If TN is the number of “true negative” results and FP is the number of falsely positive results, the specificity is defined as: specificity = TN/(TN + FP) × 100%
    189. Example
       • What would you guess is the specificity of any particular clinical laboratory test? (Choose any one you want.)
    190. Answer
       • Since reference ranges are customarily set to include the central 95% of values in healthy subjects, we expect 5% of values from healthy people to be “abnormal”; this is the false positive rate.
       • Hence, the specificity of most clinical tests is no better than 95%.
    191. Sensitivity vs. specificity
       • Sensitivity and specificity are inversely related.
    192. [Figure: overlapping marker-concentration distributions for diseased and non-diseased populations, with the decision cutoff between them.]
    193. Sensitivity vs. specificity
       • Sensitivity and specificity are inversely related.
       • How do we determine the best compromise between sensitivity and specificity?
    194. [Figure: Receiver Operating Characteristic (ROC) curve; true positive rate (sensitivity) vs. false positive rate (1 - specificity).]
    195. Evaluating the clinical performance of laboratory tests
       • The sensitivity of a test indicates the likelihood that it will be positive when disease is present.
       • The specificity of a test indicates the likelihood that it will be negative when disease is absent.
       • The predictive value of a test indicates the probability that the test result correctly classifies a patient.
    196. Predictive value
       • The predictive value of a clinical laboratory test takes into account the prevalence of a certain disease, to quantify the probability that a positive test is associated with the disease in a randomly selected individual, or, alternatively, that a negative test is associated with health.
    197. Illustration
       • Suppose you have invented a new screening test for Addison disease.
       • The test correctly identified 98 of 100 patients with confirmed Addison disease (what is the sensitivity?).
       • The test was positive in only 2 of 1000 patients with no evidence of Addison disease (what is the specificity?).
    198. Test performance
       • The sensitivity is 98.0%.
       • The specificity is 99.8%.
       • But Addison disease is a rare disorder: incidence ≈ 1:10,000.
       • What happens if we screen 1 million people?
    199. Analysis
       • In 1 million people, there will be 100 cases of Addison disease.
       • Our test will identify 98 of these cases (TP).
       • Of the 999,900 non-Addison subjects, the test will be positive in 0.2%, or about 2,000 (FP).
    200. Predictive value of the positive test
       • The predictive value is the percentage of all positives that are true positives: PV+ = TP/(TP + FP) = 98/(98 + 2,000) ≈ 4.7%
    201. What about the negative predictive value?
       • TN = 999,900 - 2,000 = 997,900
       • FN = 100 × 0.02 = 2
       • PV- = TN/(TN + FN) = 997,900/997,902 ≈ 99.9998%
    202. Summary of predictive value
       • Predictive value describes the usefulness of a clinical laboratory test in the real world.
       • Or does it?
    203. Lessons about predictive value
       • Even when you have a very good test, it is generally not cost-effective to screen for diseases that have low incidence in the general population. Exception?
       • The higher the clinical suspicion, the better the predictive value of the test. Why?
    204. Efficiency
       • We can combine the PV+ and PV- into a quantity called the efficiency: efficiency = (TP + TN)/(TP + TN + FP + FN) × 100%
       • The efficiency is the percentage of all patients that are classified correctly by the test result.
    205. Efficiency of our Addison screen
       • Efficiency = (98 + 997,900)/1,000,000 ≈ 99.8%. (A sketch follows.)
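Pulling slides 197-205 together, a sketch computing all five performance figures for the Addison screen; the numbers are the ones worked out above:

```python
TP, FN = 98, 2        # 100 true cases, 98 detected
FP = 2_000            # 0.2% of 999,900 unaffected subjects
TN = 999_900 - FP

sensitivity = TP / (TP + FN)                  # 0.98
specificity = TN / (TN + FP)                  # ~0.998
ppv = TP / (TP + FP)                          # ~0.047 (4.7%)
npv = TN / (TN + FN)                          # ~0.999998
efficiency = (TP + TN) / (TP + TN + FP + FN)  # ~0.998
print(sensitivity, specificity, ppv, npv, efficiency)
```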
    206. “To call in the statistician after the experiment is done may be no more than asking him to perform a postmortem examination: he may be able to say what the experiment died of.” Ronald Aylmer Fisher (1890-1962)
    207. Application of statistics to quality control
       • We expect quality control results to fit a Gaussian distribution.
       • We can use Gaussian statistics to predict the variability in quality control values.
       • What sort of tolerance will we allow for variation in quality control values?
       • Generally, we will question variations that have a statistical probability of less than 5%.
    208. “He uses statistics as a drunken man uses lamp posts -- for support rather than illumination.” Andrew Lang (1844-1912)
    209. Westgard's rules, with the approximate frequency at which each is violated by chance (a sketch follows):

        Rule    Chance frequency
        1-2s    1 in 20
        1-3s    1 in 300
        2-2s    1 in 400
        R-4s    1 in 800
        4-1s    1 in 600
        10-x    1 in 1000
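A simplified sketch of checking a QC series against the rules above. The z values are (result - mean)/SD for consecutive controls, and the R-4s check here is a simplified within-series reading of that rule:

```python
def westgard_flags(z):
    """Return the rules (from the list above) violated by a z-score series."""
    flags = []
    if any(abs(v) > 3 for v in z):
        flags.append("1-3s")                       # one point beyond 3 SD
    if any(z[i] > 2 and z[i + 1] > 2 or z[i] < -2 and z[i + 1] < -2
           for i in range(len(z) - 1)):
        flags.append("2-2s")                       # two consecutive beyond 2 SD
    if any(abs(z[i] - z[i + 1]) > 4 for i in range(len(z) - 1)):
        flags.append("R-4s")                       # range of a pair exceeds 4 SD
    if any(all(v > 1 for v in z[i:i + 4]) or all(v < -1 for v in z[i:i + 4])
           for i in range(len(z) - 3)):
        flags.append("4-1s")                       # four consecutive beyond 1 SD
    if len(z) >= 10 and (all(v > 0 for v in z[-10:]) or
                         all(v < 0 for v in z[-10:])):
        flags.append("10-x")                       # ten consecutive on one side
    return flags

print(westgard_flags([0.5, 1.2, 2.3, 2.1, -0.4]))  # flags 2-2s
```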
    210.-213. Some examples [Figures: four Levey-Jennings QC charts with limits at the mean ±1, 2, and 3 SD, illustrating the rules above.]
    214. “In science one tries to tell people, in such a way as to be understood by everyone, something that no one ever knew before. But in poetry, it's the exact opposite.” Paul Adrien Maurice Dirac (1902-1984)
