2011 JSM - Good Statistical Practices

1. JSM 2011: Good Statistical Practice for Medical Device Trial Submission
   Huyuan Yang†, Matthew Rousseau*, H. Terry Liao*,P, and Peter Lam*
   († Millennium Pharmaceuticals; * Boston Scientific Corporation; P presenter)
   The views expressed in this presentation are the authors' own and not those of the companies we represent.

2. Outline
   • Key elements in clinical trials (RCT)
   • Sample size justification
   • Case examples
   • Missing data
   • Sample size adjustment
   • Interim data peek
   • Site poolability justification

3. Key Elements in Trial Design
   Goals (RCT):
   • Ensure the study groups are comparable (like-to-like) at baseline
   • Distinguish the treatment effect from other influences, such as disease progression, placebo/sham effects, or biased observations
   Achieved by:
   • Randomization
   • Control (placebo/sham/active)
   • Blinding (patient/evaluator)
   • A protocol designed to minimize protocol deviations, patient withdrawals, and missing data

4. Sample Size Calculation
   • Estimate the hypothesized expected rates: Test vs. Control
   • Use published data (e.g., summarized in forest plots) to estimate the Control rate
   • Set the Test rate at the minimum clinically meaningful benefit
   • Alpha of 5% and power ≥ 80%
   • Provide sample sizes under a range of assumptions

5. Case Example 1: Superiority RCT
   • Superiority trial design
   • Control event rate: 10% to 11%
   • Test event rate: 5% to 6%
   • 2-sided alpha: 5%
   • Power: 90%

6. Estimate Sample Size Range
   N per group, using a 2-sided 5% alpha and 90% power:

                            Control Rate
   Test Rate    10%     10.25%   10.5%   10.75%   11%
   5%           582     535      495     460      428
   5.25%        654     599      552     510      473
   5.5%         740     674      617     568      524
   5.75%        842     762      694     635      583
   6%           965     867      784     713      652
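
These cells follow from the standard normal-approximation formula for comparing two proportions. A minimal sketch in Python that reproduces them (the function name is illustrative; scipy is assumed available):

```python
# Sketch: per-group N for a two-sample test of proportions
# (2-sided alpha; pooled variance under H0, unpooled under H1).
import math
from scipy.stats import norm

def n_per_group(p_test, p_ctrl, alpha=0.05, power=0.90):
    """Per-group sample size for detecting p_test vs. p_ctrl."""
    z_a = norm.ppf(1 - alpha / 2)              # two-sided critical value
    z_b = norm.ppf(power)
    p_bar = (p_test + p_ctrl) / 2              # pooled rate under H0
    var0 = 2 * p_bar * (1 - p_bar)             # variance under H0
    var1 = p_test * (1 - p_test) + p_ctrl * (1 - p_ctrl)  # under H1
    n = ((z_a * math.sqrt(var0) + z_b * math.sqrt(var1)) ** 2
         / (p_test - p_ctrl) ** 2)
    return math.ceil(n)

print(n_per_group(0.055, 0.105))  # -> 617, the table's center cell
```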

7. Estimate Power Range
   Power at a fixed N of 617 per group, using a 2-sided 5% alpha:

                            Control Rate
   Test Rate    10%     10.25%   10.5%   10.75%   11%
   5%           91%     93%      95%     96%      97%
   5.25%        89%     90%      92%     94%      95%
   5.5%         84%     87%      90%     92%      94%
   5.75%        79%     83%      86%     89%      91%
   6%           73%     78%      82%     85%      88%
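
The reverse exercise, power at a fixed N, uses the same normal approximation. A sketch under the same assumptions (again, the function name is illustrative):

```python
# Sketch: approximate power of the two-proportion z-test at a fixed
# per-group N (2-sided alpha).
import math
from scipy.stats import norm

def power_at_n(p_test, p_ctrl, n, alpha=0.05):
    """Approximate power with n patients per group."""
    z_a = norm.ppf(1 - alpha / 2)
    p_bar = (p_test + p_ctrl) / 2
    se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n)   # SE under H0
    se1 = math.sqrt((p_test * (1 - p_test)
                     + p_ctrl * (1 - p_ctrl)) / n)  # SE under H1
    z = (abs(p_test - p_ctrl) - z_a * se0) / se1
    return norm.cdf(z)

print(round(power_at_n(0.055, 0.105, 617), 2))  # -> ~0.90, center cell
print(round(power_at_n(0.06, 0.11, 617), 2))    # -> ~0.88, corner cell
```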

8. Case Example 2: Non-Inferiority RCT
   • NI design: test device expected to perform equivalently to an approved device
   • Expected event rate for both Test and Control: 26.4%
   • NI margin: 7.5%
   • Preliminary sample size exercise using the Wald z-test
   • One-sided 5% alpha
   • 90% power
   • Required N per group: 592 patients

9. Estimate Power Range
   Power at a fixed N of 592 per group, with an NI margin of 7.5%:

                              Test Rate
   Control Rate   24%     25%     26%     27%     28%
   24%            91%     83%     70%     55%     39%
   25%            96%     90%     82%     69%     54%
   26%            98%     95%     90%     81%     68%
   27%            99%     98%     95%     89%     80%
   28%            99%     99%     97%     94%     89%
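
Both the N of 592 on slide 8 and the power cells above can be reproduced with the Wald z-test approximation the slide names. A minimal sketch, assuming the event is an undesirable outcome, so non-inferiority means the test rate exceeds the control rate by less than the margin:

```python
# Sketch: non-inferiority sample size and power via the Wald z-test
# for a difference of two proportions (one-sided alpha).
import math
from scipy.stats import norm

def ni_n_per_group(p_test, p_ctrl, margin, alpha=0.05, power=0.90):
    """Per-group N to show p_test - p_ctrl < margin."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var = p_test * (1 - p_test) + p_ctrl * (1 - p_ctrl)
    return math.ceil((z_a + z_b) ** 2 * var
                     / (margin - (p_test - p_ctrl)) ** 2)

def ni_power(p_test, p_ctrl, margin, n, alpha=0.05):
    """Approximate power of the NI test at a fixed per-group N."""
    se = math.sqrt((p_test * (1 - p_test) + p_ctrl * (1 - p_ctrl)) / n)
    z = (margin - (p_test - p_ctrl)) / se - norm.ppf(1 - alpha)
    return norm.cdf(z)

print(ni_n_per_group(0.264, 0.264, 0.075))          # -> 592
print(round(ni_power(0.26, 0.26, 0.075, 592), 2))   # -> ~0.90, center cell
print(round(ni_power(0.28, 0.24, 0.075, 592), 2))   # -> ~0.39, corner cell
```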

10. Sample Size Calculation
   Recommendations:
   • Use the best 'objective' estimate of the expected control rate from published data
   • Determine the minimum clinically meaningful difference with the medical team
   • Start with 90% power
   • Calculate N based on the point estimates
   • Assess the power at that fixed N under a range of expected rates
   • For slowly enrolling trials with large N, consider an adaptive design (with its operational challenges)

11. Missing Data
   • Missing data are unrecorded values that, if recorded, would be meaningful for analysis
   • Attrition is problematic!
   • A low attrition rate reflects the realities of clinical trials: patients move, withdraw consent, etc.
   • A high attrition rate suggests poor trial design or conduct: loss to follow-up, unanalyzable images, lab tests not taken, an excessive burden of QOL measures, etc.
   • Out-of-window visits: early vs. late?
   • Patient death?

12. NAS Report: 'Handling of Missing Data in Clinical Trials'
   • Focus is on confirmatory RCTs
   • Need to distinguish between:
     1. Treatment dropouts: treatment-specific outcomes not recorded because participants go off-protocol, i.e., discontinue their assigned treatments (lack of efficacy or tolerability)
     2. Analysis dropouts: missing data arising from an inability to record outcomes, e.g., missed clinic visits or loss to follow-up

13. Missing Data Handling
   • If possible, document on the CRF the primary reason the patient missed the clinic visit
   • Assess whether the amount of, and reasons for, missing data are balanced between treatment groups in the RCT
   • Perform sensitivity analyses for missing data

14. Sensitivity Analyses
   • Objective: explore the impact of missingness by evaluating the plausible range of the missing values, and the resulting responses, under different assumptions
   • Pre-specify the missing-data sensitivity methods in the protocol or SAP:
     1. Complete-case analysis
     2. Single imputation, including LOCF, worst-case, and Q3_Q1 analyses
     3. Inverse probability-weighted methods
     4. Likelihood-based methods: maximum likelihood, Bayes, multiple imputation
     5. Tipping point analysis (sketched below)
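
Item 5 lends itself to a short illustration: impute every possible combination of outcomes for the patients with missing data and report the scenarios in which the study conclusion flips. A minimal sketch for a binary endpoint; all counts below are hypothetical, and the pooled z-test is just one reasonable choice of test:

```python
# Sketch: tipping-point analysis for a binary endpoint.
# All counts are hypothetical.
import math
from scipy.stats import norm

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference of two proportions."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return 2 * norm.sf(abs(x1 / n1 - x2 / n2) / se)

# Observed events / completers, plus patients with missing outcomes
x_t, n_t, miss_t = 30, 580, 20   # test arm
x_c, n_c, miss_c = 62, 575, 25   # control arm

# Impute every combination of events among the missing patients and
# record the scenarios where superiority (p < 0.05) is lost.
flips = [(e_t, e_c)
         for e_t in range(miss_t + 1)
         for e_c in range(miss_c + 1)
         if two_prop_pvalue(x_t + e_t, n_t + miss_t,
                            x_c + e_c, n_c + miss_c) >= 0.05]
print(f"{len(flips)} of {(miss_t + 1) * (miss_c + 1)} imputation scenarios "
      f"lose significance; first tipping point: {flips[0] if flips else None}")
```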

15. Sample Size Adjustment
   • An adaptive design for sample size adjustment must be documented in the protocol, with details pre-specified in a charter for an independent group to perform the task
   • Efficient adaptive-design software for 'optimal' trial sample sizes is available (e.g., Cytel's commercial EAST software and forthcoming adaptive trial design tools)

16. Interim Data Peek
   • No unplanned interim analyses, for any reason
   • Any unplanned interim summary look, and who reviews the interim data, should be documented
   • The safety surveillance group reviews aggregate safety data on a regular basis, per a safety monitoring plan
   • The Data Monitoring Committee (DMC) reviews unblinded summary data, per the DMC charter

17. Multiplicity
   • Primary efficacy endpoint
   • Primary safety endpoint
   • Multiple secondary endpoints
   • Concern: inflated experiment-wise type I error with multiple tests if alpha is not adjusted
   • Solution: use a gatekeeping strategy to control the overall type I error (sketched below)
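
One common gatekeeping implementation is the fixed-sequence procedure: test the endpoints in a pre-specified order at the full alpha, and stop at the first non-significant result. The slide does not name a specific procedure, so this is just one illustrative option; the endpoint names and p-values are made up:

```python
# Sketch: fixed-sequence gatekeeping. Endpoints are tested in a
# pre-specified order at full alpha; testing stops at the first
# failure, so the overall type I error stays at alpha.
def fixed_sequence(endpoints, alpha=0.05):
    """endpoints: list of (name, p_value) in pre-specified order."""
    results = {}
    for name, p in endpoints:
        if p < alpha:
            results[name] = "significant"
        else:
            results[name] = "not significant; gate closed"
            break   # later endpoints are not formally tested
    return results

# Illustrative ordering and p-values
print(fixed_sequence([("primary efficacy", 0.012),
                      ("primary safety", 0.034),
                      ("secondary #1", 0.081),
                      ("secondary #2", 0.002)]))
```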

18. Site Pooling
   • Sites with fewer than 20 patients enrolled are combined into 'virtual sites' based on geographic region (see the sketch below)
   • 'Virtual sites' have ≥ 20 patients but no more than the largest-enrolling site
   • Geographic region options:
     1. US: NE, SE, Central, NW, SW regions
     2. US: each state, then neighboring states
     3. EU: UK, Western, Central, Eastern, Russia
     4. EU: each EU country, then neighboring countries
     5. Global: US, Canada, South America, EU, Asia, Australia/NZ
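
The pooling rule above is easy to mechanize. A sketch under stated assumptions; the data layout, labels, and greedy bucketing strategy are all illustrative, since the slide does not prescribe an algorithm:

```python
# Sketch: pool sites with < 20 patients into regional 'virtual sites',
# each capped at the size of the largest enrolling site.
from collections import defaultdict

def pool_sites(sites, min_n=20):
    """sites: list of (site_id, region, n_enrolled) tuples."""
    cap = max(n for _, _, n in sites)                    # largest site
    mapping = {s: s for s, _, n in sites if n >= min_n}  # large sites stand alone
    by_region = defaultdict(list)
    for s, region, n in sites:
        if n < min_n:
            by_region[region].append((s, n))
    for region, small in by_region.items():
        seq, total = 1, 0
        for s, n in small:
            # close the current virtual site once it is big enough and
            # adding the next site would exceed the cap
            if total >= min_n and total + n > cap:
                seq, total = seq + 1, 0
            mapping[s] = f"virtual-{region}-{seq}"
            total += n
    return mapping

# Illustrative usage: sites B, C pool in the NE; D, E pool in the SE
print(pool_sites([("A", "NE", 45), ("B", "NE", 8), ("C", "NE", 15),
                  ("D", "SE", 5), ("E", "SE", 12), ("F", "SE", 30)]))
```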

19. Poolability Across Sites
   • Logistic regression for a binary outcome; ANOVA for a continuous outcome
   • Statistical model: Primary Endpoint = Site + Treatment + Site*Treatment
   • Pooling across sites is justified if the treatment-by-site interaction is statistically insignificant, i.e., the treatment effects (the difference in event rates between the two treatment groups) do not differ across sites (see the sketch after slide 20)

20. Poolability Not Justified
   • Identify the 'influential' site(s) causing the problem
   • Investigate poolability by excluding one site (or two or more sites) at a time
   • Refer the 'influential' site(s) to the clinical team for further investigation
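
For a binary endpoint, the interaction test on slide 19 and the leave-one-site-out screen above might look as follows with statsmodels; the data frame layout and column names are hypothetical:

```python
# Sketch: treatment-by-site interaction test for a binary endpoint,
# plus a leave-one-site-out screen for influential sites.
# Assumes a pandas DataFrame `df` with columns: outcome (0/1),
# treatment (0/1), and site (pooled/virtual site label).
import statsmodels.formula.api as smf
from scipy.stats import chi2

def interaction_pvalue(df):
    """Likelihood-ratio test of the Site*Treatment interaction."""
    full = smf.logit("outcome ~ C(site) * treatment", data=df).fit(disp=0)
    reduced = smf.logit("outcome ~ C(site) + treatment", data=df).fit(disp=0)
    lr = 2 * (full.llf - reduced.llf)
    return chi2.sf(lr, df=full.df_model - reduced.df_model)

def leave_one_site_out(df):
    """Re-test the interaction with each site excluded in turn."""
    return {s: interaction_pvalue(df[df["site"] != s])
            for s in df["site"].unique()}
```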

21. Summary
   • Three key elements of a clinical trial: randomization, control, and blinding
   • Optimize the power/sample size over a range of expected test/control rates
   • Design the protocol to minimize protocol deviations and to avoid missing data
   • Pre-specify missing-data sensitivity analyses, multiplicity adjustments, sample size adjustment, and interim data looks in the protocol
   • Site poolability justification is important, as more sites are recruited to shorten enrollment time

22. References
   • ICH E9 Guideline: Statistical Principles for Clinical Trials (Feb 1998); www.ich.org/product/guidelines/efficacy/articles/efficacy-guidelines.html
   • Little, Handling of Missing Data in Clinical Trials: Report of an NAS Panel. AdvaMed/FDA Statistical Workshop, April 2011.
   • Sherry Yan, Some Missing Data Handling in Medical Device Clinical Trials. JSM 2009.
