Sample determinants and size

Sample size, effect size, Type I and II errors, rules of thumb


1. Sample determinants and sample size
Dr Tarek Tawfik Amin, Public Health Department, Cairo University, amin55@myway.com
2. Objectives
By the end of this session, attendees should be able to:
1) Recognize the importance of a proper sample size.
2) Identify the essential components of sample size calculation for clinical and epidemiological research.
3) Use different software packages to calculate sample size for different scenarios.
3. Sample Size Determination
4. Why is it important?
• An integral part of quantitative research.
• Ensures the validity, accuracy, reliability, and scientific and ethical integrity of research.
5. Considerations in sample size calculation
Three main concepts to be considered:
• Estimation (depends on several components).
• Justification (in light of budgetary or biological considerations).
• Adjustment (accounting for potential dropouts or the effect of covariates).
6. Role of pilot studies
• A preliminary study intended to test feasibility and data collection methods and to gather information for sample size calculations.
• Not a study in itself (too small to produce a definitive answer) but a tool for finding answers.
• A sample size calculation is not required.
7. Importance of sample size calculation
• Scientific reasons
• Ethical reasons
• Economic reasons
8. I- Scientific reasons
• In a trial with negative results and a sufficient sample size, the conclusion is firm: the treatment has no effect (no difference).
• In a trial with negative results and insufficient power (an insufficient sample size), one may mistakenly conclude that the treatment under study made no difference (a false conclusion).
9. II- Ethical reasons
• An undersized study can expose subjects to potentially harmful treatments without the capability to advance knowledge.
• An oversized study can expose an unnecessarily large number of subjects to potentially harmful treatments.
10. III- Economic reasons
• An undersized study wastes resources because it cannot yield meaningful results.
• An oversized study may produce a statistically significant result of doubtful clinical significance, also wasting resources.
11. Approaches to sample size calculation
• Precision analysis
– Bayesian
– Frequentist
• Power analysis
– The most common
12. A- Precision analysis
Applicable in studies concerned with estimating parameters:
– Precision
– Accuracy
– Prevalence
13. B- Power analysis
• Used in studies concerned with detecting an effect.
• Important to ensure that, if a clinically meaningful effect exists, there is a high chance of detecting it.
14. Factors influencing sample size calculations
1- The objective (precision or power analysis).
2- Details of the intervention and control arms.
3- The outcomes:
– Categorical or continuous
– Single or multiple
– Primary or secondary
– Clinical relevance
– Missing data
15. Factors influencing sample size calculations
4- Possible covariates to control for (confounders).
5- The unit of randomization/analysis:
– Individuals/family practices
– Hospital wards
– Communities
– Families
16. 6- The research design:
– Simple RCT or cluster RCT
– Equivalence
– Non-randomized intervention study
– Observational study
– Prevalence study
– Sensitivity and specificity
– Paired comparison
– Repeated-measures study
7- Research subjects:
– Target population
– Inclusion/exclusion criteria
– Baseline risk
– Compliance rate
– Drop-out rate
17. 8- Parameters:
a- Desired level of significance
b- Desired power
c- One or two tails
d- Possible ranges or variations in the expected outcome
e- The smallest difference (smallest clinically important difference)
f- Justification from previous data (published data, previous work, review of records, expert opinion)
g- The software or formula being used
18. Effect size
• The numerical value summarizing the difference of interest (effect size):
– Odds ratio (OR): null value OR = 1
– Relative risk (RR): null value RR = 1
– Risk difference (RD): null value RD = 0
– Difference between means (D): null value D = 0
– Correlation coefficient (r): null value r = 0
19. Statistical terms
• P-value: the probability of obtaining, by chance alone, an effect as extreme as or more extreme than the one observed.
• Significance level of a test: the cut-off point for the p-value (conventionally 5%, or 0.05).
• Power of a test: the probability of correctly rejecting the null hypothesis when there is indeed a real difference or association (typically set at 80% or more).
• Effect size of clinical importance.
20. One-sided or two-sided tests
Two-sided test:
• The alternative hypothesis suggests that a difference exists in either direction.
• Should be used unless there is a very good reason to do otherwise.
One-sided test:
• Used when it is completely implausible that the result could go in the other direction, or when the only concern is in one direction:
– Toxicity studies
– Safety evaluation
– Adverse drug reactions
– Risk analysis
21. Approach to calculating the sample size
1. Specify your hypothesis.
2. Specify the significance level (α).
3. Specify an effect size.
4. Obtain historical values (previous research).
5. Specify a power (1−β).
6. Use the appropriate formula to calculate the sample size.
22. Components of sample size calculations
• Acceptable levels of Type I and Type II errors
• Appropriate statistical power
• Effect size
• Significance level
• Estimated measurement of variability
• Design effect (in surveys)
23. Type I and Type II errors
24. Possible situations in hypothesis testing (significance level 0.05)

| Decision | H0 is true | H0 is not true |
| --- | --- | --- |
| Reject H0 | Type I error (α): false rejection, false positive | OK (1−β) |
| Do not reject H0 | OK (1−α): confidence | Type II error (β): false acceptance, false negative |

1−β = power: the probability of rejecting the null hypothesis when it is NOT true. Usually 80% is the minimum required for any test.
25. Type I and Type II errors
• Type I error, or alpha (false positive): rejecting the null hypothesis when it is true.
• Type II error, or beta (false negative): accepting the null hypothesis when it is false.
26.
• The probability of committing a Type I error (rejecting the null hypothesis when it is actually true) is called alpha (α), also known as the level of statistical significance.
• An α level of 0.05 sets 5% as the maximum chance of incorrectly rejecting the null hypothesis.
27.
• The probability of making a Type II error (failing to reject the null hypothesis when it is actually false) is called beta (β).
• The quantity (1−β) is called power: the ability to detect a difference of a given size.
• If β is set at 0.10, we accept a 10% chance of missing an association of a given effect size.
• This represents a power of 90% (a 90% chance of finding an association of that size).
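A quick way to see α and power in action is simulation. The sketch below is an illustration added here, not part of the original slides: it repeatedly runs a two-sample t-test when the null is true and when a real difference of 0.5 SD exists (with an assumed 64 subjects per group), and the rejection rates approximate α and power.

```python
# A minimal simulation (not from the slides) of alpha and power:
# run many two-sample t-tests when H0 is true and when a real
# difference of 0.5 SD exists, and count rejections at alpha = 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
n, trials, alpha = 64, 2000, 0.05
false_pos = 0  # rejections under H0 (estimates alpha)
true_pos = 0   # rejections under a real effect (estimates power)

for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b_null = rng.normal(0.0, 1.0, n)  # same population: H0 is true
    b_eff = rng.normal(0.5, 1.0, n)   # true difference of 0.5 SD
    false_pos += ttest_ind(a, b_null).pvalue < alpha
    true_pos += ttest_ind(a, b_eff).pvalue < alpha

print(f"Type I error rate: {false_pos / trials:.3f} (expected ~0.05)")
print(f"Power:             {true_pos / trials:.3f} (expected ~0.80)")
```

With 64 per group and an effect of 0.5 SD, the standard formula predicts roughly 80% power, which the simulated rejection rate should approximate.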
28. P-value
• A "non-significant" result (one with a P-value greater than 0.05) does not mean that there is no association in the population; it only means that the result observed in the sample could have occurred by chance alone.
29. Estimated measurement of variability
• The expected standard deviation of the measurements made within each comparison group.
• As variability increases, the required sample size increases.
30. $Z_\alpha$ and $Z_\beta$ for calculating the sample size

| Significance level (α) | $Z_\alpha$, two-tailed | $Z_\alpha$, one-tailed | Power (1−β) | $Z_\beta$ |
| --- | --- | --- | --- | --- |
| 0.01 (99%) | 2.576 | 2.326 | 0.80 | 0.842 |
| 0.02 (98%) | 2.326 | 2.054 | 0.85 | 1.036 |
| 0.05 (95%) | 1.960 | 1.645 | 0.90 | 1.282 |
| 0.10 (90%) | 1.645 | 1.282 | 0.95 | 1.645 |
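These critical values can be reproduced with any statistics package; a minimal check (assuming scipy is available) follows.

```python
# Reproducing the table's critical values with scipy: the two-tailed z
# uses 1 - alpha/2, the one-tailed z uses 1 - alpha, and z(beta) uses
# the desired power directly.
from scipy.stats import norm

for a in (0.01, 0.02, 0.05, 0.10):
    print(f"alpha {a:.2f}: two-tailed {norm.ppf(1 - a / 2):.3f}, "
          f"one-tailed {norm.ppf(1 - a):.3f}")
for p in (0.80, 0.85, 0.90, 0.95):
    print(f"power {p:.2f}: z(beta) {norm.ppf(p):.3f}")
```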
31. Sample size for comparative studies (dichotomous outcomes)

$n = \dfrac{\left[ Z_\alpha \sqrt{2\,\bar{P}(1-\bar{P})} + Z_\beta \sqrt{P_e(1-P_e) + P_c(1-P_c)} \right]^2}{(P_e - P_c)^2}$

where $\bar{P} = (P_e + P_c)/2$ and $\Delta = P_e - P_c$; $P_e$ = proportion in the experimental group, $P_c$ = proportion in the control group; $Z_\alpha$ = 1.960 (5% significance, two-tailed) and $Z_\beta$ = 0.842 (80% power).
32. An investigator hypothesizes that caffeine is better than aminophylline at reducing apnea of prematurity. Previous studies have reported an efficacy of 40% for aminophylline. To detect a 5% difference between the two drugs with 80% power and a two-tailed test at the 5% significance level, what sample size would be needed?

$N = \left\{ 1.960\sqrt{2 \times 0.375(1-0.375)} + 0.842\sqrt{0.35(1-0.35) + 0.40(1-0.40)} \right\}^2 / 0.05^2 \approx 1471$ per group.

For continuity correction and a higher degree of accuracy, increase the sample size by $2/|P_e - P_c| = 40$; the final sample size is then about 1511 per group.
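Translating the formula into code makes the calculation reusable. The sketch below is an illustration, assuming scipy is available; the function name, defaults, and continuity flag are mine, not from the slides. It reproduces the apnea example above, including the continuity correction.

```python
# A sketch translating the two-proportion formula above into a reusable
# function (the name, defaults, and continuity flag are assumptions).
import math
from scipy.stats import norm

def n_two_proportions(pe, pc, alpha=0.05, power=0.80, continuity=True):
    """Sample size per group for comparing two proportions, two-tailed."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.960 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.842 for 80% power
    p_bar = (pe + pc) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(pe * (1 - pe) + pc * (1 - pc))) ** 2
    n = numerator / (pe - pc) ** 2
    if continuity:                  # the slide's correction: add 2/|Pe - Pc|
        n += 2 / abs(pe - pc)
    return math.ceil(n)

print(n_two_proportions(0.35, 0.40))  # apnea example: ~1511 per group
```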
33. Sample size calculations for comparative studies (continuous outcome)

$N = \dfrac{4\sigma^2 (Z_\alpha + Z_\beta)^2}{D^2}$

where $\sigma$ = standard deviation of the outcome variable, $Z_\alpha$ = 1.960 (confidence level), $Z_\beta$ = 0.842 (power), and $D$ = the effect size (difference to be detected).
34. An investigator plans a randomized controlled trial of the effect of salbutamol versus ipratropium bromide on FEV1 after 2 weeks of treatment. A previous study reported a mean FEV1 of 2 liters (standard deviation 1 liter) in treated persons with asthma. If the investigator wants to detect a 10% difference between the treatments (0.2 liters), how many individuals will be required?

$N = 4 \times 1^2 \times (1.960 + 0.842)^2 / 0.2^2 \approx 785$ persons required.
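The same formula in code, reproducing the FEV1 example (a sketch; the helper name is an assumption):

```python
# The continuous-outcome formula as a helper (naming is assumed).
import math
from scipy.stats import norm

def n_two_means(sd, diff, alpha=0.05, power=0.80):
    """Total sample size N = 4 * sd^2 * (Z_alpha + Z_beta)^2 / D^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(4 * sd ** 2 * z ** 2 / diff ** 2)

print(n_two_means(sd=1.0, diff=0.2))  # FEV1 example: 785 in total
```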
35. Sample size for descriptive studies: continuous variable

$N = \dfrac{4 Z_\alpha^2 S^2}{W^2}$

where $Z_\alpha$ = 1.960 (confidence level), $S$ = standard deviation, and $W$ = total width of the confidence interval.
36. Suppose an investigator wants to estimate the mean weight of newborns at 30-34 weeks of gestation with a 95% confidence interval no wider than ±0.1 kg (W = 0.2 kg). A previous study reported a standard deviation of 1 kg. The required sample size would be

$N = 4 \times 1.96^2 \times 1^2 / 0.2^2 \approx 384$ newborns.
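In code (a sketch with assumed naming), with W passed as the full interval width:

```python
# The mean-estimation formula; W is the FULL width of the confidence
# interval, so +/- 0.1 kg gives W = 0.2 (naming is assumed).
import math
from scipy.stats import norm

def n_mean_estimate(sd, width, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)  # 1.960 for 95% confidence
    return 4 * z ** 2 * sd ** 2 / width ** 2

n = n_mean_estimate(sd=1.0, width=0.2)
print(f"{n:.1f} -> {math.ceil(n)}")  # 384.1 -> 385 if rounded up
```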
37. Descriptive study: dichotomous variable

$N = \dfrac{4 Z_\alpha^2 P(1-P)}{W^2}$

where $Z_\alpha$ = 1.960 (confidence level), $W$ = total width of the confidence interval, and $P$ = pre-study estimate of the proportion.
38. Consider an investigator who wishes to determine the incidence of nosocomial pneumonia (NP) in a neonatal intensive care unit at the 95% confidence level. He selects a confidence interval of ±10% (W = 0.2), and the mean incidence of NP reported earlier is 20%. The required sample size would be

$N = 4 \times 1.96^2 \times 0.20(1-0.20) / 0.2^2 \approx 62$.
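And the dichotomous version (again a sketch with assumed naming):

```python
# The proportion-estimation formula; W is again the full CI width,
# so +/- 10% gives W = 0.2 (naming is assumed).
import math
from scipy.stats import norm

def n_proportion_estimate(p, width, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(4 * z ** 2 * p * (1 - p) / width ** 2)

print(n_proportion_estimate(p=0.20, width=0.2))  # NP example: 62
```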
39. Strategies for maximizing power and minimizing the sample size
• Use common outcomes.
• Use a paired design (such as a cross-over trial).
• Use continuous variables.
40. General rules of thumb
1- Don't forget multiple-testing corrections (e.g., Bonferroni); see the sketch below.
2- It is better to be conservative (assume a two-sided test).
3- Remember that the sample size calculation gives you the minimum required.
4- Non-randomized studies require a much larger sample to allow adjustment for confounders.
5- Equivalence studies need a larger sample size than studies aiming to demonstrate a difference.
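Rule 1 can be made concrete: a Bonferroni correction divides α by the number of planned comparisons m, which raises $Z_\alpha$ and hence the required sample size. The sketch below is an assumed illustration reusing the FEV1 numbers (sd = 1 liter, D = 0.2 liter) with a hypothetical m = 3.

```python
# A sketch of rule 1: Bonferroni divides alpha by the number of planned
# comparisons m, raising Z_alpha and the required sample size.
import math
from scipy.stats import norm

def n_total(sd, diff, alpha, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(4 * sd ** 2 * z ** 2 / diff ** 2)

m = 3  # hypothetical number of primary comparisons
print(n_total(1.0, 0.2, alpha=0.05))      # 785 uncorrected
print(n_total(1.0, 0.2, alpha=0.05 / m))  # ~1047, noticeably larger
```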
41. General rules of thumb
• For a moderate to large effect size (0.5 < effect size < 0.8), about 30 subjects per group are needed.
• For comparisons among 3 or more groups, detecting a moderate effect size of 0.5 with 80% power requires about 14 subjects per group.
42. Rules of thumb for associations
• Multiple regression: the minimal requirement is 5 subjects per independent variable; the desired ratio is 15:1.
• Multiple correlation: for 5 or fewer predictors use n > 50; for 6 or more use 10 subjects per predictor.
• Logistic regression: for stable models, use 10-15 events per predictor variable (see the sketch below).
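The logistic-regression rule lends itself to a one-line check; the helper below is a hypothetical illustration of the 10-15 events-per-variable heuristic (names and values are assumptions).

```python
# A hypothetical helper encoding the logistic-regression heuristic of
# 10-15 outcome events per predictor variable.
def enough_events(n_events: int, n_predictors: int, min_epv: int = 10) -> bool:
    """True when events-per-variable meets the rule of thumb."""
    return n_events >= min_epv * n_predictors

print(enough_events(120, 8))  # True: 15 events per predictor
print(enough_events(50, 8))   # False: ~6 events per predictor
```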
43. Rules of thumb for associations
• Large samples are needed when:
– The distribution is non-normal
– The effect size is small
– There is substantial measurement error
– Stepwise regression is used
• For chi-square testing (2×2 table): the sample should be large enough that no expected cell count is below 5, and the overall sample size should be at least 20.
44. Rules of thumb for associations
• For factor analysis: at least 50 participants per variable, with a minimum of 300 overall.
• N = 50 very poor; N = 100 poor; N = 200 fair; N = 300 good; N = 500 very good.
45. Software for sample size calculations
• nQuery Advisor 2000
• Power and Precision 1997
• PASS 2000
• UnifyPow 1998
• Epi Info: descriptive studies
• OpenEpi: descriptive studies
46. Thank you
