Innovative Strategies For Successful Trial Design
Starting 15:00 UK/10:00 East Coast
HOSTED BY: Ronan Fitzpatrick
 Head of Statistics
 nQuery Lead Researcher
 FDA Guest Speaker
 Guest Lecturer
 Webinar Host
Webinar Overview
Design Challenges Overview
Accounting for Design Constraints
Statistical Design Challenges
Value of Flexible Design
Worked Examples Overview
Two Means Group Sequential Trial
Unblinded SSR Example
Two Means Conditional Power
Two Means Blinded SSR
Mixed/Hierarchical Models
Posterior Error (Lee & Zelen)
SSR for Survival
Worked Examples
In 2018, 91% of organizations with clinical trials approved
by the FDA used nQuery for sample size and power calculation
About nQuery
PART 1 Design Challenges Overview
High Cost Low Success
Source: The Tufts Center for the Study of Drug Development
90% failure rate in drug development
$2.6 billion cost of drug development
10 years average timeline for drug development
The pharmaceutical industry has a problem
R&D efficiency has never been under more pressure
All aspects of trial design should be reviewed – real-world choices
& data, study objective, statistical methods, study flexibility
Legislative push for innovative approaches (US: 21st Century Cures Act, EU: Adaptive Pathways) creates an opening for wider dialogue
However, remember that costs also reflect high public expectations, so correct inferences cannot be compromised
Need stakeholder buy-in to mould trial towards “optimal” design
 Sponsor, regulator, clinicians/on-site expertise, statisticians, patients
How to deal with this challenge?
Types of Challenge
#1 Design Constraints
 e.g. randomization, stratification, hierarchical/multi-centre effects, recruitment, blinding, ITT
#2 Statistical/Analysis Challenges
 e.g. endpoint/estimand?, statistical model, “success” criterion, covariate adjustment, hypothesis type
#3 Flexible Design
 e.g. interim analysis, adaptive changes (SSR, arm-selection etc.), ad-hoc protocol changes
PART 2 Design Constraints
Real-world constraints greatly affect the design and statistical
choices appropriate for a study
In clinical trials, time and cost considerations are encouraging global trials and reliance on a greater number of centres and CROs
This creates challenges but also opportunities such as greater
generalizability, sub-group analyses, design flexibility etc.
Designing from “first principles” is important, but consider these issues early
 Especially true for pre-clinical/academic but still relevant for Phase III
Dealing with Design Constraints
A common issue in trials is accounting for the effect of recruitment from multiple “centres” (e.g. high N needed, real-world constraints)
Subject-level analysis follows level of randomization (e.g. d.f. for
effect) and random-effects (mixed modeling) used for centres
While a hierarchical effect might be expected to increase the required sample size, in practice it can lead to the opposite (e.g. when centre effects explain variance that would otherwise inflate the residual error)
Vital to account for knock-on effect on design choices available
 Randomization level, fixed vs random effect, covariate approach
Multi-centre/Hierarchical Effects
“A cluster randomized design is employed to
reduce contamination. Using the cluster
randomized design with church as a cluster
unit, we will recruit a sample size of 32
churches (16 clusters per arm with 12
individuals per cluster) for an overall sample
of 384 individual participants. This number of
churches and participants achieves 91%
power to detect a difference of 2.5 kg
(Standard Deviation = 7) (approximately 4%
body weight loss) between the 2 groups’
mean body weight loss (effect size = 0.35)
from pre- to postintervention assessment
when the intra-cluster correlation is 0.01
using a linear model with a significance level
of 0.05.” Source: Medicine (2018)
Parameter | Value
Significance Level (Two-Sided) | 0.05
Mean Difference (kg) | 2.5
Standard Deviation (Observations) | 7
Intracluster Correlation | 0.01
Clusters Per Group | 16
Subjects per Cluster | 12
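As a rough cross-check on the quoted calculation, here is a minimal Python sketch assuming a simple normal-approximation test of two means and the standard design effect 1 + (m - 1)*ICC; it is not necessarily the exact model used in the paper or in nQuery, and the function name and arguments are illustrative:

```python
from scipy.stats import norm

def crt_power(delta, sd, icc, m, k, alpha=0.05):
    """Approximate power for a two-arm cluster randomized trial comparing means:
    delta = mean difference, sd = subject-level SD, icc = intracluster
    correlation, m = subjects per cluster, k = clusters per arm."""
    deff = 1 + (m - 1) * icc          # design effect (variance inflation)
    n_eff = (m * k) / deff            # effective sample size per arm
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    return norm.cdf((delta / sd) * (n_eff / 2) ** 0.5 - z_crit)

# Values from the Medicine (2018) cluster randomized example above
print(round(crt_power(delta=2.5, sd=7, icc=0.01, m=12, k=16), 3))  # ~0.91
```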
PART 3 Statistical Considerations
Statistical choices have an important effect on what trial questions/hypotheses are of interest & how to answer them
In clinical trials, big issues exist around what “success” is (2.5% α)
and what to measure (estimands, “responder” analysis etc.)
However, advanced statistical methods are not an excuse for ill-judged design, so try not to get lost in the tyranny of small differences
Statistical models should follow design and implied assumptions
 For example, see the previous part; sample size is emergent from the model
Statistical Considerations
Bayes in Clinical Trial Design
Bayesian methods continue to be of great interest in clinical trials
However, 2.5% Type I error seen as barrier in confirmatory trials
Several methods proposed in trial design and SSD to fulfil
Bayesian “success” as well as Type I error
Methods tend to deal with issues of prior uncertainty and the
inverse-conditional problem
Lee & Zelen derive Bayesian Type I/II errors for posterior chance
of H0/H1 being true given frequentist “success” (i.e. significance)
Argue this better reflects practical decision-making though note
difference between requirement of clinician vs. regulator
“Assuming a mean (±SD) number of
ventilator-free days of 12.7±10.6, we
estimated that a sample of 524
patients would need to be enrolled
in order for the study to have 80%
power, at a two-tailed significance
level of 0.05, to detect a mean
between-group difference of 2.6
ventilator-free days. On the basis of
data from the PAC-Man trial, we
estimated that the study-withdrawal
rate would be 3% and we therefore
calculated that the study required a
total of 540 patients.”
Source: NEJM (2014)
Parameter | Value
Significance Level (Two-Sided) | 0.05
Mean Difference | 2.6
Standard Deviation | 10.6
n per Group | 262
Target Power | 80%
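The quoted calculation can be approximated with the standard two-sample formula for comparing means; the sketch below (a z approximation refined with t quantiles; the function name and iteration scheme are illustrative, not the trial's actual method) lands on roughly the same 262 per group:

```python
import math
from scipy.stats import norm, t

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample test of means,
    starting from the normal approximation and refining with t quantiles."""
    n = 2 * ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sd / delta) ** 2
    for _ in range(5):  # iterate so the degrees of freedom match the answer
        df = 2 * math.ceil(n) - 2
        n = 2 * ((t.ppf(1 - alpha / 2, df) + t.ppf(power, df)) * sd / delta) ** 2
    return math.ceil(n)

# Ventilator-free days example: delta = 2.6, SD = 10.6
print(n_per_group(delta=2.6, sd=10.6))  # ~262 per group (524 total before dropout)
```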
“Assume the Z-test example but we
want a posterior Type I error of 0.05
(probability the null hypothesis is
true given significance) and posterior
Type II of 0.2 (probability the null
hypothesis is false given non-
significance). What would the
equivalent sample size and
frequentist planning errors (α, 1-β)
be, assuming a range (0.25 - 0.75) of
prior beliefs against the null
hypothesis?”
Parameter | Value
α* | 0.2
β* | 0.05
Mean Difference | 2.6
Standard Deviation | 10.6
N per Group | ?
Test Significance Level, α | ?
Power, 1-β | ?
(cont.)
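A minimal Python sketch of the inversion described in this example, taking the posterior errors exactly as defined in the quote (P(H0 true | significant) and P(H1 true | non-significant)) and a prior probability theta that the alternative is true; this illustrates the Lee & Zelen-style relationship only, it is not the exact nQuery implementation, and the function names are illustrative:

```python
import math
from scipy.stats import norm

def frequentist_from_posterior(a_post, b_post, theta):
    """Solve for frequentist (alpha, beta) giving the requested posterior errors:
    a_post = P(H0 true | significant), b_post = P(H1 true | not significant),
    theta = prior probability that H1 is true."""
    A = a_post * theta / ((1 - theta) * (1 - a_post))
    B = b_post * (1 - theta) / (theta * (1 - b_post))
    alpha = A * (1 - B) / (1 - A * B)
    beta = B * (1 - A) / (1 - A * B)
    return alpha, beta

def n_two_means(delta, sd, alpha, beta):
    """Normal-approximation per-group sample size for a two-sided test."""
    return math.ceil(2 * ((norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)) * sd / delta) ** 2)

# theta, frequentist alpha, power, n per group (low priors force a tiny alpha)
for theta in (0.25, 0.5, 0.75):   # range of prior beliefs from the example
    a, b = frequentist_from_posterior(0.05, 0.20, theta)
    print(theta, round(a, 4), round(1 - b, 3), n_two_means(2.6, 10.6, a, b))
```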
PART 4 Flexible Design
Increasing interest in approaches for clinical trials which allow greater flexibility to change the trial while it is on-going
Ad-hoc changes to the protocol of an on-going trial need significant justification but will continue to be a reality of trial conduct
Greater interest from sponsors and regulators in designs which
allow interim decisions and changes on a per-protocol basis
Up-front costs and limited scope may make some trials unfeasible
 Significant difference between ideal trial with hindsight and actual trial
Flexible Design
Adaptive Design
Adaptive designs are any design where a change or decision is
made to a trial while still on-going
Encompasses a wide variety of potential adaptations
 e.g. Early stopping, SSR, enrichment, seamless, dose-finding
Adaptive trials seek to give the trialist control to improve the trial based on all available information
Adaptive trials can decrease costs & improve inferences, though they require careful consideration to avoid bias or wasted resources
Adaptive Design Review
Advantages
1. Earlier Decisions
2. Reduced Potential Cost
3. Higher Potential Success
4. Greater Control
5. Better Seamless Designs
Disadvantages
1. More Complex
2. Logistical Issues
3. Modified Test Statistics
4. Greater Expertise
5. Regulatory Approval
Regulatory Context
New FDA draft guidance published in October 2018 (PDUFA VI)
 EU (Adaptive Pathways), ICH E20 starts 2019
Far less categorical than 2010 draft
 Emphasizes early collaboration with FDA
 Focus on design issues and Type I error
 e.g. pre-specification, blinding, simulation
In-depth on certain adaptive designs
 SSR, enrichment, switching, multiple treatments
 Also views on Bayesian and complex designs
“Adaptive designs have the potential to improve ... study power and reduce the sample size and total cost" for investigational drugs, including "targeted medicines that are being put into development today”
Scott Gottlieb (FDA Commissioner)
Sample Size Re-estimation (SSR)
Will focus here on specific adaptive design of SSR
Adaptive design focused on increasing the sample size if needed
 An obvious adaptation target due to intrinsic SSD uncertainty
 Note that it is more suited to short/knowable follow-up
 Could also adaptively lower N, but this is not encouraged
Two Primary Types: 1) Unblinded SSR; 2) Blinded SSR
 Differ on whether decision made on blinded data or not
 Both target different aspects of initial SSD uncertainty
Unblinded Sample Size Re-estimation
SSR suggested when interim effect size is “promising” (Chen et al)
 “Promising” user-defined but based on unblinded effect size
 Extends GSD with 3rd option: continue, stop early, increase N
Power for the optimistic effect but increase N for smaller, still-relevant effects
 Updated FDA Guidance: design which “can provide efficiency”
A common criterion proposed for unblinded SSR is conditional power (CP)
 Probability of significance given the interim data (see the sketch below)
2 methods here: Chen, DeMets & Lan; Cui, Hung & Wang
 1st uses GSD statistics but only at the penultimate look & for high CP
 2nd uses a weighted statistic but is allowed at any look and any CP
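For reference, a minimal sketch of conditional power under the usual normal approximation (see Jennison & Turnbull), with the "current trend" choice of assumed effect; the example numbers are purely illustrative and this is not the exact nQuery computation:

```python
from scipy.stats import norm

def conditional_power(z1, t, z_crit, drift=None):
    """Conditional power at information fraction t, given interim statistic z1
    and final critical value z_crit. 'drift' is the assumed standardized effect
    theta * sqrt(I_max); by default the interim trend z1 / sqrt(t) is used."""
    if drift is None:
        drift = z1 / t ** 0.5                      # "current trend" assumption
    mean_final = z1 * t ** 0.5 + drift * (1 - t)   # E[Z_final | z1, drift]
    sd_final = (1 - t) ** 0.5                      # SD of the remaining increment
    return 1 - norm.cdf((z_crit - mean_final) / sd_final)

# Illustrative numbers: interim z = 1.8 halfway through, final critical value 1.96
print(round(conditional_power(z1=1.8, t=0.5, z_crit=1.96), 3))  # ~0.80
```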
“Using an unstratified log-rank test at the one-
sided 2.5% significance level, a total of 282 events
would allow 92.6% power to demonstrate a 33%
risk reduction (hazard ratio for RAD/placebo of
about 0.67, as calculated from an anticipated 50%
increase in median PFS, from 6 months in placebo
arm to 9 months in the RAD001 arm). With a
uniform accrual of approximately 23 patients per
month over 74 weeks and a minimum follow up
of 39 weeks, a total of 352 patients would be
required to obtain 282 PFS events, assuming an
exponential progression-free survival distribution
with a median of 6 months in the Placebo arm
and of 9 months in RAD001 arm. With an
estimated 10% lost to follow up patients, a total
sample size of 392 patients should be
randomized.”
Source: nejm.org
Parameter | Value
Significance Level (One-Sided) | 0.025
Placebo Median Survival (months) | 6
Everolimus Median Survival (months) | 9
Hazard Ratio | 0.66667
Accrual Period (Weeks) | 74
Minimum Follow-Up (Weeks) | 39
Power (%) | 92.6
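The quoted 92.6% power for 282 events can be cross-checked with Schoenfeld's approximation for the log-rank test under 1:1 allocation; a minimal sketch under that assumption (not necessarily the exact method used in the trial report):

```python
import math
from scipy.stats import norm

def logrank_power(events, hr, alpha_one_sided=0.025):
    """Approximate power of a log-rank test with a given number of events,
    1:1 allocation (Schoenfeld): z = |log HR| * sqrt(D / 4) - z_alpha."""
    z = abs(math.log(hr)) * (events / 4) ** 0.5 - norm.ppf(1 - alpha_one_sided)
    return norm.cdf(z)

# Inputs from the table above (HR = 6/9 ≈ 0.667 under exponential PFS)
print(round(logrank_power(events=282, hr=6 / 9), 3))  # ~0.926
```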
(cont.)
Assume the fixed design above but with O’Brien-Fleming efficacy and HSD (γ=-2) futility bounds and two equally spaced interim analyses
Assume an interim HR of 0.8 (vs. the planned 0.666), a total of 303 expected events (interim looks at 101 and 202 events) and a nominal final-look alpha of 0.0231.
What will the required events (E) for SSR be under Chen-DeMets-Lan/Cui-Hung-Wang, assuming a maximum events multiplier of 3?
Parameter | Value
Nominal Final Look Sig. Level | 0.0231
Initial HR | 0.666…
Interim HR | 0.8
Initial Expected Events (E) | 303
Interim Events (2nd Look) | 202
Maximum Events | 909
Lower CP Bound (CDL / CHW) | Derived / 40%
Upper CP Bound | 92.6%
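A rough sketch of the re-estimation logic for this example: compute conditional power at the interim under the observed HR trend and, if it falls in the promising zone, search for the event total that reaches the upper CP bound, capped at 3 x 303 = 909 events. This uses the same normal approximation as above (information ≈ events/4 with 1:1 allocation) and is illustrative only; the CDL and CHW procedures also adjust the test statistic/boundaries, so the exact nQuery answers may differ:

```python
import math
from scipy.stats import norm

def cp_events(total_events, interim_events, interim_hr, z_crit):
    """Conditional power at the final analysis with 'total_events' events,
    assuming the interim hazard ratio persists (log-rank, 1:1 allocation,
    information ~ events / 4)."""
    theta = -math.log(interim_hr)                    # standardized log-HR effect
    i1, ik = interim_events / 4, total_events / 4    # interim / final information
    z1 = theta * i1 ** 0.5                           # interim z under observed HR
    num = z_crit * ik ** 0.5 - z1 * i1 ** 0.5 - theta * (ik - i1)
    return 1 - norm.cdf(num / (ik - i1) ** 0.5)

z_crit = norm.ppf(1 - 0.0231)                        # nominal final-look boundary
print(round(cp_events(303, 202, 0.8, z_crit), 3))    # CP at the planned 303 events (~0.46, "promising")

# Smallest event total (capped at 909) reaching the 92.6% upper CP bound
for e in range(303, 910):
    if cp_events(e, 202, 0.8, z_crit) >= 0.926:
        print(e)                                     # roughly 850-860 events
        break
else:
    print("cap reached:", round(cp_events(909, 202, 0.8, z_crit), 3))
```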
PART 5 New nQuery Release | ver 8.4
nQuery Summer 2019 Update
Adds new tables across nQuery to help you with any trial design –
Classical, Bayesian & Adaptive
29 New Classical Design Tables: Hierarchical Modelling, Interval Estimation
5 New Bayesian Design Tables: Assurance, Posterior Error
7 New Adaptive Design Tables: Blinded SSR, Unblinded SSR, Conditional Power
Statsols.com/whats-new
Q&A
For further details, email info@statsols.com
Thanks for listening! Questions?
Statsols.com/trial
References
International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (1997). General considerations for clinical trials E8. Retrieved from www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E8/Step4/E8_Guideline.pdf
Evans, S. R. (2010). Fundamentals of clinical trial design. Journal of experimental stroke &
translational medicine, 3(1), 19.
Lee, Y., & Nelder, J. A. (1996). Hierarchical generalized linear models. Journal of the Royal
Statistical Society: Series B (Methodological), 58(4), 619-656.
Ahn, C., Heo, M., & Zhang, S. (2014). Sample size calculations for clustered and longitudinal
outcomes in clinical research. Chapman and Hall/CRC.
McElfish, P. A., Long, C. R., Kaholokula, J. K. A., Aitaoto, N., Bursac, Z., Capelle, L., ... & Ayers,
B. L. (2018). Design of a comparative effectiveness randomized controlled trial testing a faith-
based Diabetes Prevention Program (WORD DPP) vs. a Pacific culturally adapted Diabetes
Prevention Program (PILI DPP) for Marshallese in the United States. Medicine, 97(19).
References
Lee, S. J., & Zelen, M. (2000). Clinical trials and sample size considerations: another
perspective. Statistical science, 15(2), 95-110.
McAuley, D. F., Laffey, J. G., O'Kane, C. M., Perkins, G. D., Mullan, B., Trinder, T. J., ... &
McNally, C. (2014). Simvastatin in the acute respiratory distress syndrome. New England
Journal of Medicine, 371(18), 1695-1703.
Jennison, C., & Turnbull, B. W. (1999). Group sequential methods with applications to clinical
trials. CRC Press.
Friede, T., & Kieser, M. (2006). Sample size recalculation in internal pilot study designs: a
review. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 48(4), 537-555.
US Food and Drug Administration. (2018) Adaptive design clinical trials for drugs and
biologics (Draft guidance). Retrieved from https://www.fda.gov/media/78495/download
Chen, Y. J., DeMets, D. L., & Gordon Lan, K. K. (2004). Increasing the sample size when the
unblinded interim result is promising. Statistics in medicine, 23(7), 1023-1038.
References
Cui, L., Hung, H. J., & Wang, S. J. (1999). Modification of sample size in group sequential
clinical trials. Biometrics, 55(3), 853-857.
Mehta, C. R., & Pocock, S. J. (2011). Adaptive increase in sample size when interim results are promising: a practical guide with examples. Statistics in medicine, 30(28), 3267-3284.
Liu, Y., & Lim, P. (2017). Sample size increase during a survival trial when interim results are
promising. Communications in Statistics-Theory and Methods, 46(14), 6846-6863.
Chen, Y. J., Li, C., & Lan, K. G. (2015). Sample size adjustment based on promising interim results and its application in confirmatory clinical trials. Clinical Trials, 12(6), 584-595.
Yao, J. C., Shah, M. H., Ito, T., Bohas, C. L., Wolin, E. M., Van Cutsem, E., ... & Tomassetti, P.
(2011). Everolimus for advanced pancreatic neuroendocrine tumors. New England Journal of
Medicine, 364(6), 514-523.
