Optimizing Oncology Trial Design - FAQs & Common Issues
In this free webinar you will learn about:
Endpoints
Models
Covariates, stratification and censoring issues
Adaptive design - complications and opportunities
Sample size determination
& more
In this free webinar we will offer guidance on how to optimize your oncology trial design. Specifically, we will examine the frequently asked questions and common issues that arise.
https://www.statsols.com/webinar/optimizing-oncology-trial-design
Sample size for survival analysis - a guide to planning successful clinical t... | nQuery
Determining the appropriate number of events needed for survival analysis is a complex task as study planners try to predict what sample size will be needed after accounting for the complications of unequal follow-up, drop-out and treatment crossover.
Statistical, logistical and ethical considerations must all be balanced when planning a survival analysis, and together they complicate life for biostatisticians. This complexity has created a need for new analyses and procedures to support the planning of survival trials.
The wider move from fixed to flexible designs has opened up opportunities for advanced methods such as adaptive design and Bayesian analysis to deal with the unique complications of planning for survival data, but these methods bring complications of their own that need to be explored too.
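As a concrete starting point for the event count discussed above, a standard approach is Schoenfeld's approximation for the two-arm log-rank test under proportional hazards. The hazard ratio, power and event probability below are illustrative assumptions, not figures from the webinar:

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.9, allocation=0.5):
    """Approximate number of events for a two-arm log-rank test
    (Schoenfeld's formula, proportional hazards assumed)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(
        (z_alpha + z_beta) ** 2
        / (allocation * (1 - allocation) * math.log(hazard_ratio) ** 2)
    )

# Illustrative planning values: HR = 0.7, 90% power, two-sided 5% alpha
events = schoenfeld_events(0.7)
# Convert required events to subjects, given an assumed overall event
# probability of 60% after accounting for follow-up and censoring
n_total = math.ceil(events / 0.6)
```

Note that the subject count, unlike the event count, depends on assumptions about accrual, follow-up and dropout, which is exactly where the complications described above enter.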
Webinar slides - alternatives to the p-value and power | nQuery
What are the alternatives to the p-value & power? What is the next step for sample size determination? We will explore these issues in this free webinar presented by nQuery.
Innovative Sample Size Methods For Clinical Trials | nQuery
"Innovative Sample Size Methods for Clinical Trials" is hosted to coincide with the Spring 2018 update to nQuery - The leading Sample Size Software.
Hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - you'll learn about the benefits of a range of procedures and how you can implement them into your work:
1) Dose-escalation with the Bayesian Continual Reassessment Method
The Continual Reassessment Method (CRM) is a growing alternative to the 3+3 design for Phase I trials seeking the Maximum Tolerated Dose (MTD).
See how researchers can overcome 3+3 drawbacks to easily find the required sample size for this beneficial alternative for finding the MTD.
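To illustrate how the CRM updates its dose recommendation, here is a minimal one-parameter power-model sketch. The skeleton, prior standard deviation and target toxicity rate are illustrative assumptions, not values from the webinar:

```python
import numpy as np

# Assumed prior "skeleton" of toxicity probabilities per dose, and target rate
skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
target = 0.25
# Power model: p_i(a) = skeleton_i ** exp(a), with prior a ~ N(0, 1.34^2),
# evaluated on a grid for simple numerical posterior integration
a_grid = np.linspace(-4.0, 4.0, 801)
prior = np.exp(-a_grid ** 2 / (2 * 1.34 ** 2))
prior /= prior.sum()

def crm_recommend(dose_levels, toxicities):
    """Posterior-mean toxicity per dose and the recommended next dose
    (the dose whose estimated toxicity is closest to the target)."""
    likelihood = np.ones_like(a_grid)
    for d, y in zip(dose_levels, toxicities):
        p = skeleton[d] ** np.exp(a_grid)
        likelihood *= p ** y * (1 - p) ** (1 - y)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    p_post = np.array([(skeleton[d] ** np.exp(a_grid) * posterior).sum()
                       for d in range(len(skeleton))])
    return p_post, int(np.argmin(np.abs(p_post - target)))

# Three patients treated at dose index 2, all with dose-limiting toxicity:
p_post, next_dose = crm_recommend([2, 2, 2], [1, 1, 1])
```

After observing toxicity, the posterior shifts toward higher toxicity estimates at every dose, so the recommended dose moves down; in a real trial the model is refit after each cohort.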
2) Bayesian Assurance with Survival Example
This Bayesian alternative to power has experienced a rapid rise in interest and application from researchers.
See how Assurance is being used by researchers to discover the true “probability of success” of a trial.
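Assurance is the expected power averaged over a prior on the treatment effect, and it can be sketched with a short Monte Carlo calculation for a two-arm comparison of means. The prior and design numbers below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_per_arm, sd, alpha = 100, 1.0, 0.05
# Assumed prior on the true difference in means: delta ~ N(0.3, 0.15^2)
deltas = rng.normal(0.3, 0.15, 100_000)
se = sd * np.sqrt(2 / n_per_arm)            # SE of the mean difference
z_crit = norm.ppf(1 - alpha / 2)
# Power at each plausible effect size, then averaged over the prior
power_given_delta = norm.cdf(deltas / se - z_crit)
assurance = power_given_delta.mean()
```

Unlike conventional power, which conditions on a single assumed effect, assurance answers the question sponsors actually care about: given everything believed about the effect, how likely is a positive trial?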
3) Mendelian Randomization
Mendelian randomization (MR) is a method that allows testing of a causal effect from observational data in the presence of confounding factors.
However, in order to design efficient Mendelian randomization studies, it is essential to calculate the appropriate sample sizes required. We demonstrate what to do to achieve this.
4) Negative Binomial Distribution
The negative binomial model is increasingly used to model count data. One of the challenges of applying it in clinical trial design is sample size estimation.
We demonstrate how best to determine the appropriate sample size in the presence of challenges such as unequal follow-up or dispersion.
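One widely used approximation (in the spirit of Zhu and Lakkis, 2014) treats the variance of the estimated log rate ratio as the sum of Poisson-type terms plus a dispersion contribution. The rates, follow-up time and dispersion below are illustrative assumptions:

```python
import math
from scipy.stats import norm

def nb_sample_size(rate_control, rate_treat, follow_up, dispersion,
                   alpha=0.05, power=0.9):
    """Approximate per-arm sample size for comparing two negative binomial
    event rates over a common follow-up time, with equal dispersion k.
    Per-subject variance of log(rate) is approximated by 1/(rate*t) + k."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = (1 / (follow_up * rate_control)
                + 1 / (follow_up * rate_treat)
                + 2 * dispersion)
    return math.ceil(z ** 2 * variance / math.log(rate_treat / rate_control) ** 2)

# Illustrative values: control rate 1.0/yr, treatment 0.7/yr,
# 2 years of follow-up, dispersion k = 0.8
n_per_arm = nb_sample_size(1.0, 0.7, 2.0, 0.8)
```

The dispersion term shows directly why overdispersed counts need larger samples than a Poisson calculation would suggest, and shorter follow-up inflates the rate terms in the same way.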
2020 trends in biostatistics - what you should know about study design - slid... | nQuery
2020 Trends In Biostatistics - What you should know about study design.
In this free webinar you will learn about:
-Adaptive designs in confirmatory trials
-Using external data in study planning
-Innovative designs in early-stage trials
To watch the full webinar:
https://www.statsols.com/webinar/2020-trends-in-biostatistics-what-you-should-know-about-study-design
Designing studies with recurrent events | Model choices, pitfalls and group s... | nQuery
In this free webinar, we will examine the important design considerations for analyzing recurring events and counts.
Watch the webinar at: https://www.statsols.com/en/webinar/designing-studies-with-recurrent-events
Designing studies with recurrent events (Model choices, pitfalls and group sequential design)
Innovative sample size methods for adaptive clinical trials webinar web ver... | nQuery
View the video here:
https://www.statsols.com/webinar/innovative-sample-size-methods-for-adaptive-clinical-trials
Given the high failure rates and the increased costs of clinical trials, researchers need innovative design strategies to best optimize financial resources and reduce the risk to patients.
Adaptive designs are emerging as a way to reduce the risk and cost associated with clinical trials. The FDA has published guidance on adaptive designs (following the 21st Century Cures Act) and actively encourages sponsors to use adaptive trials.
Adaptive design is a clinical trial design that allows adaptations or modifications to aspects of the trial after its initiation without undermining the validity and integrity of the trial.
In this webinar, Ronan will demonstrate nQuery's new Adaptive module focusing on Sample Size Re-Estimation & Group-Sequential Design.
In this webinar you will learn about:
The pros and cons of adaptive designs
Sample Size Re-Estimation
Group-Sequential Design
Conditional Power
Predictive Power
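Of the quantities listed above, conditional power has a convenient closed form in the Brownian-motion formulation of a group-sequential trial: given the interim z-statistic and the information fraction, one can compute the probability of final rejection assuming the currently estimated drift continues. The interim values below are illustrative assumptions:

```python
from scipy.stats import norm

def conditional_power_current_trend(z_interim, info_fraction, alpha=0.025):
    """Conditional power under the 'current trend' assumption for a
    one-sided test: P(final rejection | interim data), assuming the
    effect estimated at the interim persists for the rest of the trial."""
    t = info_fraction
    z_final = norm.ppf(1 - alpha)
    b_t = z_interim * t ** 0.5                     # Brownian value B(t)
    drift_remaining = (z_interim / t ** 0.5) * (1 - t)  # current-trend drift
    return 1 - norm.cdf((z_final - b_t - drift_remaining) / (1 - t) ** 0.5)

# Halfway through the trial (t = 0.5) with an interim z of 1.0:
cp = conditional_power_current_trend(z_interim=1.0, info_fraction=0.5)
```

A low conditional power at the interim is the usual trigger for sample size re-estimation or futility stopping; predictive power instead averages this quantity over the posterior for the effect.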
Webinar slides - how to reduce sample size ethically and responsibly | nQuery
[Webinar] How to reduce sample size...ethically and responsibly | In this free webinar, you will learn various design strategies to help reduce the sample size of your study in an ethical and responsible manner. Practical examples will be used throughout.
Non-inferiority and Equivalence - study design considerations and sample size | nQuery
About the webinar
This webinar examines the role of non-inferiority and equivalence in study design
In this free webinar, you will learn about:
-Regulatory information on this type of study design
-Considerations for study design and your sample size
-Practical worked examples of
--Non-inferiority Testing
--Equivalence Testing
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch the video at: https://www.statsols.com/webinars
Extending A Trial’s Design: Case Studies Of Dealing With Study Design Issues | nQuery
About the webinar
As trials increase in complexity and scope, there is a requirement for trial designs to reflect this.
From non-proportional hazards in survival analysis to cluster randomization, we examine how to handle the study design issues of complex trials.
In this free webinar, you will learn about:
Dealing with study design issues
Practical worked examples of
Non-proportional Hazards
Cluster Randomization
Three Armed Trials
Non-proportional Hazards
Non-proportional hazards and complex survival curves have become of increasing interest, due to being commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods and alternative methods that better adapt to deviations.
In this webinar, we look at methods proposed for complex survival curves and the weighted log-rank test as a candidate model to deal with a delayed survival effect.
Cluster Randomization
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we introduce cluster randomization and stepped-wedge designs to provide an insight into the requirements of more complex randomization schedules.
Three Armed Trials
Non-inferiority testing is a common hypothesis test in the development of generic medicines and medical devices. The most common design compares the proposed non-inferior treatment to the standard treatment alone, but this leaves it uncertain whether the treatment effect is the same as in previous studies. This “assay sensitivity” problem can be resolved by using a three-arm trial, which includes a placebo alongside the new and reference treatments for direct comparison.
In this webinar, we show a complete testing approach to this gold standard design and how to find the appropriate allocation and sample size for such a study.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size | nQuery
Title: Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch Here: http://bit.ly/2ndRG4B
In this webinar you’ll learn about:
Benefits of Sensitivity Analysis: What does the researcher gain by conducting a sensitivity analysis?
Why isn't Sensitivity Analysis formalized: Why does sensitivity analysis still lack the type of formalized rules and grounding to make it a routine part of sample size determination in every field?
How Bayesian Assurance works: Bayesian Assurance provides key contextual information on what is likely to happen over the total range of possible values, rather than at the small number of fixed points used in a sensitivity analysis
Elicitation & SHELF: How expert opinion is elicited and then how to integrate these opinions with each other plus prior data using the Sheffield Elicitation Framework (SHELF)
Why use in both Frequentist or Bayesian analysis: How and why these methods can be used for studies which will use Frequentist or Bayesian methods in their final analysis
Plus more
Bayesian Approaches To Improve Sample Size Webinar | nQuery
Title: Bayesian Approaches To Improve Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
In this webinar you'll learn about:
Bayesian Sample Size Determination: See how the growth of Bayesian analysis has helped transform our ideas about statistical inference and methodologies in clinical trials
Bayesian Assurance: Get an informative answer on how likely it is to see a “positive” outcome from the trial and then make better decisions on what trials to back
Posterior Credible Intervals and Mixed Bayesian Likelihood: Enable researchers to use prior information from pilot studies and other sources to make quicker and better decisions
Plus much more
Clinical Trial Simulation to Evaluate the Pharmacokinetics of an Abuse-Deterr... | Loan Pham
A clinical trial simulation (CTS) was performed to estimate the sample size and the optimal plasma sampling times that sufficiently characterize the PK parameters (CL, Vd) after a single dose of an abuse-deterrent (AD) opioid in pediatric subjects.
Cluster randomised trials with excessive cluster sizes: ethical and design im... | Karla Hemming
Investigators submitting funding applications strive for nominal levels of power to ensure their applications are competitive. If the number of clusters is limited, this might mean large clusters are needed to achieve that power, yet a slightly lower power might be achievable with a drastic reduction in cluster sizes. Alternatively, a minimal increase in the number of clusters might make the desired level of power achievable, again with a drastic reduction in cluster sizes.
About the webinar
Flexible Clinical Trial Design | Survival, Stepped-Wedge & MAMS Designs
As clinical trials increase in complexity, trial designs must adapt to these complications.
From dealing with non-proportional hazards in survival analysis to creating seamless Phase II/III clinical trials, it is an exciting time to be involved in clinical trial design and analysis.
In this free webinar, we will explore a select few topics that highlight the additional flexibility available when designing modern clinical trials.
In this free webinar you will learn about:
Flexible Survival Analysis Designs
Non-proportional hazards and other complex survival curves have become of increasing interest, due to being commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods and alternative methods that better adapt to deviations.
In this webinar, we will look at power analysis assuming complex survival curves and the weighted log-rank test as one candidate model to deal with a delayed survival effect.
Stepped-Wedge designs
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we will introduce stepped-wedge designs and provide an insight into the more complex, flexible randomization schedules available.
Multi-Arm Multi-Stage (MAMS)
MAMS designs provide the ability to assess more treatments in less time than a series of two-arm trials and can offer smaller sample size requirements than the equivalent number of two-arm trials.
In this webinar, we will look at the design of a Group Sequential MAMS design and explore its design requirements.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
For more webinars check out https://www.statsols.com/webinars
Innovative Strategies For Successful Trial Design - Webinar Slides | nQuery
Full webinar available here: https://www.statsols.com/webinar/innovative-strategies-for-successful-trial-design
[Webinar] Innovative Strategies For Successful Trial Design- In this free webinar, you will learn about:
- The challenges facing your trials
- How to calculate the correct sample size
- Worked examples including Mixed/Hierarchical Models
- Posterior Error
- Adaptive Designs For Survival
www.statsols.com
Practical Methods To Overcome Sample Size Challenges | nQuery
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues, focusing on one of the more complex areas for sample size determination: survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How should expected dropout or censoring be handled? We compare the simple loss-to-follow-up adjustment with integrating a dropout process into the sample size model.
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how to apply sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for choosing the effect size. We examine these contrasting interpretations, determine the most suitable method, and discuss how to handle parameterization options.
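The simple loss-to-follow-up adjustment mentioned above just inflates the recruited sample size so that the required number of analyzable subjects remains after dropout. A minimal sketch (the dropout rate and required n are illustrative assumptions):

```python
import math

def inflate_for_dropout(n_required, dropout_rate):
    """Simple loss-to-follow-up adjustment: recruit n / (1 - d) subjects so
    that, after an expected dropout fraction d, n analyzable subjects remain."""
    return math.ceil(n_required / (1 - dropout_rate))

# If the calculation requires 331 analyzable subjects and 15% dropout is expected:
n_recruit = inflate_for_dropout(331, 0.15)
```

This crude inflation ignores when dropouts occur; modeling dropout as a censoring process inside the sample size calculation, as discussed above, generally gives a less conservative answer for time-to-event outcomes.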
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods for overcoming common study design challenges and how these affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties surrounding issues such as:
Case-Control Analysis: We will examine how to handle study constraints and analyses performed during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive or impractical. We examine how sample size calculations are affected by cluster or Mendelian randomization.
Rare Events: How does a rare outcome affect the choice of study design and statistical methods in your study?
Minimizing Risk In Phase II and III Sample Size Calculation | nQuery
[ Watch Webinar: http://bit.ly/2thIgmi ]. In this free webinar, Head of Statistics at Statsols, Ronan Fitzpatrick, addresses the issues of reducing risk in Phase II/III sample size calculations. Topics covered will include:
Sample Size Determination For Different Trial Designs
Bayesian Sample Size Determination
Sample Size For Survival Analysis
& more
Module 13
Name: Date:
Kaplan-Meier plots: The goal is to estimate a population survival curve from a sample. i) If every patient is followed until
death, the curve may be estimated simply by computing the fraction surviving at each time. ii) However, in most studies
patients tend to drop out, become lost to follow-up, move away, etc. iii) A Kaplan-Meier analysis allows estimation of
survival over time, even when points drop out or are studied for different lengths of time.
H0 The survival time is not related to the patient treatment group. (Null Hypothesis) H1 The survival time is related to the
patient treatment group. (Alternative Hypothesis)
Baseline Survivor Function (at predictor means)... 0.0000 0.8465 1.0000 0.3114
Interpretation of the curve: Vertical axis represents estimated probability of survival for a hypothetical cohort, not actual %
surviving. • Precision of estimates depends on # observations; therefore, estimates at left-hand side are more precise than at
right-hand side (because of small #’s due to deaths and dropouts). • Curves may give the impression that a given event
occurs more frequently early than late, because of high survival rate and large # people at beginning
Yes, Kaplan-Meier survival curves prognostic indicator of UCEC because it can be generated by selecting the cancer type,
survival type and protein(s) of interest. This technique may also be used for breast, colon and other cancers because its a
statistical technique which gives proper result of survival data of the patients undergone cancer or other treatment.
References Hazra, A., & Gogtay, N. (2016). Biostatistics series module 6: Correlation and linear regression. Indian Journal of
Dermatology, 61(6), 593-601. Retr.
Due to the advancements in various data acquisition and storage technologies, different disciplines have attained the ability to not only accumulate a wide variety of data but also to monitor observations over longer time periods. In many real-world applications, the primary objective of monitoring these observations is to estimate when a particular event of interest will occur in the future. One of the major difficulties in handling such problem is the presence of censoring, i.e., the event of interests is unobservable in some instance which is either because of time limitation or losing track. Due to censoring, standard statistical and machine learning based predictive models cannot readily be applied to analyze the data. An important subfield of statistics called survival analysis provides different mechanisms to handle such censored data problems. In addition to the presence of censoring, such time-to-event data also encounters several other research challenges such as instance/feature correlations, high-dimensionality, temporal dependencies, and difficulty in acquiring sufficient event data in a reasonable amount of time. To tackle such practical concerns, the data mining and machine learning communities have started to develop more sophisticated and effective algorithms that either complement or compete with the traditional statistical methods in survival analysis. In spite of the importance of this problem and relevance to real-world applications, this research topic is scattered across various disciplines. In this tutorial, we will provide a comprehensive and structured overview of both statistical and machine learning based survival analysis methods along with different applications. We will also discuss the commonly used evaluation metrics and other related topics. The material will be coherently organized and presented to help the audience get a clear picture of both the fundamentals and the state-of-the-art techniques.
Re-analysis of the Cochrane Library data and heterogeneity challengesEvangelos Kontopantelis
Heterogeneity issues and a re-analysis of the Cochrane Library data. Presented in the 35th Annual Conference of the International Society for Clinical Biostatistics (ISCB35) in Vienna
Seminar of U.V. Spectroscopy by SAMIR PANDASAMIR PANDA
Spectroscopy is a branch of science dealing the study of interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflect spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light received by the analyte.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
7. Common Questions on Methods
1. Which endpoint(s) should I use for an oncology clinical trial?
2. Which model(s) are best for analysing oncology time-to-event data?
3. How to deal with covariates, stratification and censoring issues?
4. What is an appropriate sample size for my oncology clinical trial?
8. Oncology Endpoints Overview
Oncology clinical trials are primarily interested in survival analysis, typically via a time-to-event endpoint
Different choices exist for the appropriate endpoint for a trial
  Overall survival, progression-free survival, overall response etc.
Important considerations are cancer type, stage, purpose of treatment, and expected duration of survival
Endpoint & model should reflect assumptions regarding disease, trend and study constraints (e.g. # subjects)
10. Covariates and Stratification
To adjust for covariates, the Cox PH regression model provides a flexible approach to modelling the outcome
  Covariate modelling works similarly to other equivalent regression models
The Log-Rank test is extensible to the stratified case, assuming a constant HR across strata
Cox PH regression can also model stratification by assuming a different baseline hazard fn. (not coefficients) per stratum
11. Sample Size for Survival Analysis
Survival analysis is about the expected duration of time to an event
  Methods like the log-rank test & Cox model
Power is related to the number of events NOT the sample size
  Sample Size = subjects needed to get the # of events
Flexibility expected in survival analysis methods and estimation
  Options will depend on model and endpoint and affect design choices
Source: SEER, NCI
12. Worked Example 1
“Using an unstratified log-rank test at the one-sided 2.5% significance level, a total of 282 events would allow 92.6% power to demonstrate a 33% risk reduction (hazard ratio for RAD/placebo of about 0.67, as calculated from an anticipated 50% increase in median PFS, from 6 months in placebo arm to 9 months in the RAD001 arm).”
Parameter | Value
Significance Level (One-Sided) | 0.025
Hazard Ratio | 0.66667
Power (%) | 92.6
Source: NEJM (2011), nejm.org
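The 282-event figure in Worked Example 1 can be checked against Schoenfeld's approximation, under which log-rank power depends on the number of events and the log hazard ratio rather than on the number of subjects directly. A minimal sketch, assuming 1:1 allocation; the trial's own calculation was done in sample size software, so small rounding differences are possible:

```python
# Schoenfeld approximation for the log-rank test (1:1 allocation):
# drift = |log(HR)| * sqrt(E / 4), power = Phi(drift - z_alpha).
from math import log, sqrt
from statistics import NormalDist

def power_from_events(events: int, hr: float,
                      alpha_one_sided: float = 0.025) -> float:
    """Approximate log-rank power for a given number of events."""
    nd = NormalDist()
    drift = abs(log(hr)) * sqrt(events / 4)  # sqrt(E * p * (1 - p)), p = 0.5
    return nd.cdf(drift - nd.inv_cdf(1 - alpha_one_sided))

# Worked Example 1: HR = 0.66667, one-sided alpha = 0.025, 282 events
power = power_from_events(282, 0.66667)
print(f"Power with 282 events: {power:.3f}")  # ≈ 0.926, matching the protocol
```

Note that the subject count never enters the formula, only the event count, which is exactly the point made on the "Sample Size for Survival Analysis" slide.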
14. Survival SSD Design Issues
What is the expected survival curve(s) in the group(s)?
  Assume parametric approximation? Piece-wise curve?
Effect of unequal follow-up due to accrual period?
  What accrual pattern to assume? Maximum follow-up same for all?
How to deal with censoring, dropouts or other risks?
  Right-censored? Want to model dropout and/or competing risks?
Effect of subjects crossing over from one group to the other?
  Appropriate estimand for trial? Enough info to assume before study?
15. Evolution of Survival SSD
Move from simple approximations to more complex models
Initial methods based on normal approx. for log-rank assuming exponential
Later approximations adjust for unequal follow-up and dropout
Complex methods (Markov, simulation) allowed more flexibility
  Piece-wise curves, crossover etc.
SSD extensions for other survival models, trial designs & adaptive
  Cox, weighted, competing risks
  >2 Groups, 1 Group, CRT, CI
  Adaptive Design: GSD, SSR etc.
Sources: D. Schoenfeld (1981); L.S. Freedman (1982); J.M. Lachin & M.A. Foulkes (1986); E. Lakatos (1988)
16. Worked Example 2
“Using an unstratified log-rank test at the one-sided 2.5% significance level, a total of 282 events would allow 92.6% power to demonstrate a 33% risk reduction (hazard ratio for RAD/placebo of about 0.67, as calculated from an anticipated 50% increase in median PFS, from 6 months in placebo arm to 9 months in the RAD001 arm). With a uniform accrual of approximately 23 patients per month over 74 weeks and a minimum follow up of 39 weeks, a total of 352 patients would be required to obtain 282 PFS events, assuming an exponential progression-free survival distribution with a median of 6 months in the Placebo arm and of 9 months in RAD001 arm. With an estimated 10% lost to follow up patients, a total sample size of 392 patients should be randomized.”
Parameter | Value
Significance Level (One-Sided) | 0.025
Placebo Median Survival (months) | 6
Everolimus Median Survival (months) | 9
Hazard Ratio | 0.66667
Accrual Period (Weeks) | 74
Minimum Follow-Up (Weeks) | 39
Power (%) | 92.6
Source: NEJM (2011), nejm.org
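The step from 282 events to roughly 352 subjects can be sketched with a Lachin & Foulkes-style calculation: under uniform accrual and exponential survival, compute each arm's probability of having an event by the final analysis and divide the required events by the average probability. The accrual period, minimum follow-up and medians below come from the example; the weeks-per-month conversion (52/12) is an assumption, so this sketch lands near, rather than exactly on, the published 352:

```python
# Converting required events into required subjects under uniform accrual
# and exponential survival, in the spirit of Lachin & Foulkes (1986).
from math import exp, log

def event_prob(median: float, accrual: float, min_follow_up: float) -> float:
    """P(event by final analysis) for one exponential arm, uniform accrual."""
    lam = log(2) / median
    total_time = accrual + min_follow_up
    # Average 1 - exp(-lam * follow_up) over entry times in [0, accrual]
    return 1 - (exp(-lam * min_follow_up) - exp(-lam * total_time)) / (lam * accrual)

wk_per_month = 52 / 12                      # assumed conversion
p_placebo = event_prob(6 * wk_per_month, 74, 39)   # median 6 months, in weeks
p_active = event_prob(9 * wk_per_month, 74, 39)    # median 9 months, in weeks
n = 282 / ((p_placebo + p_active) / 2)      # subjects to observe 282 events
n_dropout = n / 0.9                         # inflate for 10% lost to follow-up
print(round(n), round(n_dropout))           # ~360 and ~400 vs published 352/392
```

The small gap to the published 352/392 presumably reflects protocol-specific accrual and dropout assumptions that are not fully reported in the excerpt.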
19. Survival SSR Complications
Unknown follow-up = more interim planning uncertainty
  Can be difficult to predict when the interim analysis will occur
  Higher # likely in active cohort when interim analysis occurs
Survival adaptive design may need new assumptions
  e.g. assumption of constant treatment effect for GSD and SSR
Two ways to increase interim events: Sample Size, Time
  Increase N: biased for early trends; Time: later trends bias?
  Freidlin & Korn (2017) discuss optimizing for “best” result
20. Group Sequential Design
2 early stopping criteria (work similarly)
1. Efficacy (α-spending)
2. Futility (β-spending)
Use Lan & DeMets “spending function” to account for multiple analyses/chances
  Spend a proportion of the total error at each look
  More conservative N vs. fixed term analysis
Multiple SFs available (incl. custom)
  O’Brien-Fleming (conservative), Pocock (liberal)
  Power Family, Hwang-Shih-DeCani (flexible)
Different heuristics for efficacy/futility
  Conservative SF expected for efficacy
  More flexibility for futility (less problematic)
O’Brien-Fleming Spending Function: α(τ) = 2(1 − Φ(z_{α/2}/√τ))
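The O'Brien-Fleming-type spending function on the slide can be evaluated directly to see how little alpha it releases early. The sketch below assumes a one-sided α of 0.025 and three equally spaced looks (an illustrative schedule, not one taken from the slides):

```python
# Lan-DeMets O'Brien-Fleming-type spending function:
#   alpha(tau) = 2 * (1 - Phi(z_{alpha/2} / sqrt(tau)))
# where tau is the information fraction in (0, 1].
from math import sqrt
from statistics import NormalDist

def obf_spent(tau: float, alpha: float = 0.025) -> float:
    """Cumulative one-sided alpha spent by information fraction tau."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / sqrt(tau)))

for tau in (1 / 3, 2 / 3, 1.0):
    print(f"tau = {tau:.2f}: cumulative alpha spent = {obf_spent(tau):.5f}")
# Almost no alpha is spent at the first look; the full 0.025 is only
# reached at tau = 1, which is why OBF bounds are called conservative.
```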
21. Unblinded SSR
SSR suggested when the interim effect size is “promising” (Chen et al.)
  “Promising” is user-defined but based on the unblinded effect size
  Extends GSD with a 3rd option: continue, stop early, increase N
  Power for an optimistic effect but increase N for lower relevant effects
  Updated FDA Guidance: design which “can provide efficiency”
Common criterion proposed for unblinded SSR is conditional power (CP)
  Probability of significance given interim data
2 methods here: Chen, DeMets & Lan; Cui, Hung & Wang
  1st uses GSD statistics but only at the penultimate look & high CP
  2nd uses a weighted statistic but is allowed at any look and CP
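Conditional power under the "current trend" assumption can be sketched with the usual B-value projection combined with the log-rank normal approximation (Z ≈ −log(HR)·√(events/4) for 1:1 allocation). The inputs below anticipate Worked Example 3 (interim HR 0.8 at 202 of 303 planned events, nominal final-look significance level 0.0231); this illustrates the CP idea rather than reproducing the exact CDL/CHW output:

```python
# Conditional power via B-values: B(t) = Z(t) * sqrt(t); under the
# current-trend assumption theta_hat = B(t)/t is projected over the
# remaining information 1 - t.
from math import log, sqrt
from statistics import NormalDist

def conditional_power(hr_interim: float, events_interim: int,
                      events_planned: int, final_alpha: float) -> float:
    nd = NormalDist()
    t = events_interim / events_planned           # information fraction
    z_k = -log(hr_interim) * sqrt(events_interim / 4)
    b = z_k * sqrt(t)                             # B-value at the interim
    z_final = nd.inv_cdf(1 - final_alpha)
    return nd.cdf((b / t - z_final) / sqrt(1 - t))

cp = conditional_power(0.8, 202, 303, 0.0231)
print(f"Conditional power: {cp:.2f}")  # ~0.46, above a 40% CHW lower bound
```

With a lower CP bound of 40%, this interim result would fall in the "promising" zone where a sample size (event) increase is allowed.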
22. Worked Example 3
Extend the previous example to a group sequential design with 2 interim analyses, an O’Brien-Fleming efficacy bound and a non-binding Hwang-Shih-DeCani futility bound with gamma = -2
Parameter | Value
Significance Level (One-Sided) | 0.025
Placebo Median Survival (months) | 6
Everolimus Median Survival (months) | 9
Hazard Ratio | 0.66667
Accrual Period (Weeks) | 74
Minimum Follow-Up (Weeks) | 39
Power (%) | 92.6
Parameter | Value
Number of Looks | 3
Efficacy Bound | O’Brien-Fleming
Futility Bound | Non-Binding
Beta Spending Function | Hwang-Shih-DeCani
HSD Parameter | -2
Source: NEJM (2011)
23. Worked Example 3 (continued)
Assume the previous group sequential design with an added SSR option
Assume interim HR = 0.8 (from 0.667) and inherit a total E of 303 (interim E of 101 and 202) and a final-look alpha of 0.0231 from the GSD.
What will the required E for SSR be for Chen-DeMets-Lan/Cui-Hung-Wang, assuming a maximum events multiplier of 3?
Parameter | Value
Nominal Final Look Sig. Level | 0.0231
Initial HR | 0.667
Interim HR | 0.8
Initial Expected Events (E) | 303
Interim Events (2nd Look) | 202
Maximum Events | 909
Lower CP Bound (CDL/CHW) | Derived/40%
Upper CP Bound | 92.6%
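One way to answer the slide's question is to invert conditional power for a Cui-Hung-Wang-style weighted test: keep the pre-specified weights fixed at the interim information fraction and solve for the second-stage events that restore the original power target, capped at the 909-event maximum. This is a rough sketch under the current-trend assumption; the actual CDL/CHW procedures involve further details (look schedule, boundary recomputation) not modelled here:

```python
# CHW-style event re-estimation sketch. Weighted final statistic:
#   Z_w = sqrt(t) * Z_1 + sqrt(1 - t) * Z_2, reject if Z_w > z_final.
# Solve CP = target for second-stage events m:
#   theta * sqrt(m / 4) = z_beta + (z_final - sqrt(t) * Z_1) / sqrt(1 - t)
from math import ceil, log, sqrt
from statistics import NormalDist

nd = NormalDist()
hr_interim, e_interim, e_planned = 0.8, 202, 303
final_alpha, target_power, max_events = 0.0231, 0.926, 909

t = e_interim / e_planned                       # interim information fraction
z_1 = -log(hr_interim) * sqrt(e_interim / 4)    # interim log-rank Z (approx.)
theta = -log(hr_interim)                        # interim log-HR trend
z_final = nd.inv_cdf(1 - final_alpha)
z_beta = nd.inv_cdf(target_power)

rhs = z_beta + (z_final - sqrt(t) * z_1) / sqrt(1 - t)
m = 4 * (rhs / theta) ** 2                      # second-stage events needed
total = min(e_interim + ceil(m), max_events)
print(f"Re-estimated total events: {total}")    # well below the 909-event cap
```

Because the interim effect (HR 0.8) is weaker than planned (HR 0.667), the re-estimated event total is far above the original 303 but still inside the maximum-events multiplier of 3.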
24. Other Topics
SSD for log-rank/Cox extensible to deal with one-sample or >2-sample cases, as well as other adaptive designs
Other models of interest: cure models, frailty models, delayed effect (immunotherapy), joint OS/PFS GSD
Confidence Interval SSD possible for special cases
  Log(HR) if normality assumed, one group w/ Type II censoring
Bayesian methods should be possible; nQuery already provides assurance for the survival case
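The assurance idea mentioned above can be illustrated with a small Monte Carlo: average the Schoenfeld power over a prior on the hazard ratio instead of plugging in a single value. The lognormal prior below (centred at the planning HR of 0.667 with sd 0.15 on the log scale) is an assumed example, not a value from the slides:

```python
# Bayesian assurance (probability of success) sketch for a survival trial:
# assurance = E_prior[ power(HR) ], estimated by simulation.
import random
from math import exp, log, sqrt
from statistics import NormalDist

def schoenfeld_power(events: int, hr: float, alpha: float = 0.025) -> float:
    nd = NormalDist()
    return nd.cdf(-log(hr) * sqrt(events / 4) - nd.inv_cdf(1 - alpha))

def assurance(events: int, prior_mean_hr: float, prior_sd_log: float,
              n_sims: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        hr = exp(rng.gauss(log(prior_mean_hr), prior_sd_log))  # lognormal prior
        total += schoenfeld_power(events, hr)
    return total / n_sims

point_power = schoenfeld_power(282, 0.667)
prob_success = assurance(282, 0.667, 0.15)
print(f"Power at HR=0.667: {point_power:.3f}")
print(f"Assurance:         {prob_success:.3f}")
# Assurance is lower than the point power: averaging over uncertainty in
# the HR pulls the success probability towards 50%.
```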
25. Discussion and Conclusions
Oncology clinical trials primarily use survival models
  Choice of endpoint important and based on real-world issues
Variety of models available for survival/TTE data
  Most common: Log-rank test and Cox PH regression model
SSD methods available for these common models
  Can also integrate many design issues (accrual, dropout etc.)
Methods for adaptive design advancing quickly
  Also more options for ≠2 groups, frailty models, CI, Bayesian…
29. The solution for optimizing clinical trials
Phases covered: Pre-clinical / Research, Early Phase, Confirmatory, Postmarketing
Animal Studies; ANOVA / ANCOVA; 1000+ scenarios for Fixed Term, Adaptive & Bayesian methods; Survival, Means, Proportions & Count endpoints; Sample Size Re-Estimation; Group Sequential Trials; Bayesian Assurance; Cross-over & personalized medicine; CRM; MCP-Mod; Simon’s Two-Stage; Cohort Study; Case-Control Study
30. References
Kilickap, S., Demirci, U., Karadurmus, N., Dogan, M., Akinci, B. and Sendur, M.A.N., 2018. Endpoints in
oncology clinical trials. J BUON, 23, pp.1-6.
U.S. Food and Drug Administration, Clinical Trial Endpoints for the Approval of Cancer Drugs and
Biologics. December 2018 Available from:
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/
ucm071590.pdf
Klein, J.P. and Moeschberger, M.L., 2006. Survival analysis: techniques for censored and truncated data.
Springer Science & Business Media.
Hosmer, D.W., Lemeshow, S., & May, S., Applied Survival Analysis: Regression Modeling of
Time‐to‐Event Data, Second Edition. Wiley
Collett, D., 2015. Modelling survival data in medical research. CRC press.
Cox, D.R., 1972. Regression models and life‐tables. Journal of the Royal Statistical Society: Series B
(Methodological), 34(2), pp.187-202.
Bariani, G.M., de Celis, F., Anezka, C.R., Precivale, M., Arai, R., Saad, E.D. and Riechelmann, R.P., 2015.
Sample size calculation in oncology trials. American journal of clinical oncology, 38(6), pp.570-574.
31. References
Freedman, L. S. (1982). Tables of the number of patients required in clinical trials using the logrank
test. Statistics in medicine, 1(2), 121-129.
Schoenfeld, D. A. (1983). Sample-size formula for the proportional-hazards regression
model. Biometrics, 499-503.
Lachin, J. M., & Foulkes, M. A. (1986). Evaluation of sample size and power for analyses of survival with
allowance for nonuniform patient entry, losses to follow-up, noncompliance, and
stratification. Biometrics, 507-519.
Lakatos, E. (1988). Sample sizes based on the log-rank statistic in complex clinical trials. Biometrics, 229-
241.
Hsieh, F.Y. and Lavori, P.W., 2000. Sample-size calculations for the Cox proportional hazards regression
model with nonbinary covariates. Controlled clinical trials, 21(6), pp.552-560.
Yao, J. C., et al. (2011). Everolimus for advanced pancreatic neuroendocrine tumors. New England
Journal of Medicine, 364(6), 514-523.
US Food and Drug Administration. (2019) Adaptive design clinical trials for drugs and biologics
(Guidance). Retrieved from https://www.fda.gov/media/78495/download
32. References
Jennison, C., & Turnbull, B. W. (1999). Group sequential methods with applications to clinical trials. CRC
Press.
Chen, Y. J., DeMets, D. L., & Gordon Lan, K. K. (2004). Increasing the sample size when the unblinded
interim result is promising. Statistics in medicine, 23(7), 1023-1038.
Cui, L., Hung, H. J., & Wang, S. J. (1999). Modification of sample size in group sequential clinical
trials. Biometrics, 55(3), 853-857.
Mehta, C.R. and Pocock, S.J., (2011). Adaptive increase in sample size when interim results are
promising: a practical guide with examples. Statistics in medicine, 30(28), 3267-3284.
Chen, Y. J., Li, C., & Lan, K. G. (2015). Sample size adjustment based on promising interim results and its
application in confirmatory clinical trials. Clinical Trials, 12(6), 584-595
Liu, Y., & Lim, P. (2017). Sample size increase during a survival trial when interim results are
promising. Communications in Statistics-Theory and Methods, 46(14), 6846-6863.
Freidlin, B., & Korn, E. L. (2017). Sample size adjustment designs with time-to-event outcomes: a
caution. Clinical Trials, 14(6), 597-604.
33. References
Phadnis, M.A., 2019. Sample size calculation for small sample single-arm trials for time-to-event data:
Logrank test with normal approximation or test statistic based on exact chi-square distribution?.
Contemporary clinical trials communications, 15, p.100360.
Lachin, J.M., 2013. Sample size and power for a logrank test and Cox proportional hazards model with
multiple groups and strata, or a quantitative covariate with multiple strata. Statistics in medicine,
32(25), pp.4413-4425.
Ren, S., & Oakley, J. E. (2014). Assurance calculations for planning clinical trials with time‐to‐event
outcomes. Statistics in medicine, 33(1), 31-45.
Xu, Z., Zhen, B., Park, Y. and Zhu, B., 2017. Designing therapeutic cancer vaccine trials with delayed
treatment effect. Statistics in medicine, 36(4), pp.592-605.
Chen, L.M., Ibrahim, J.G. and Chu, H., 2014. Sample size determination in shared frailty models for
multivariate time-to-event data. Journal of biopharmaceutical statistics, 24(4), pp.908-923.
Sozu, T., Sugimoto, T., Hamasaki, T. and Evans, S.R., 2015. Sample size determination in clinical trials
with multiple endpoints. New York: Springer.
Editor's Notes
www.Statsols.com/trial
Group sequential trials differ from fixed-period trials in that the trial data are analysed at one or more stages prior to the conclusion of the trial.
As a result, the alpha value applied at each analysis or ‘look’ must be adjusted to preserve the overall Type I error.
So, in effect, you are ‘spending’ some of your alpha at each ‘look’. The alpha values used at each look are calculated based upon the chosen spending function, the number of looks to be taken during the course of the trial, and the overall Type I error rate.