View the video here:
https://www.statsols.com/webinar/innovative-sample-size-methods-for-adaptive-clinical-trials
Given the high failure rates and rising costs of clinical trials, researchers need innovative design strategies to optimize financial resources and reduce the risk to patients.
Adaptive designs are emerging as a way to reduce the risk and cost associated with clinical trials. The FDA recently published guidance (following the 21st Century Cures Act) and is actively encouraging sponsors to use adaptive trials.
An adaptive design is a clinical trial design that allows adaptations or modifications to aspects of the trial after its initiation without undermining the trial's validity and integrity.
In this webinar, Ronan will demonstrate nQuery's new Adaptive module focusing on Sample Size Re-Estimation & Group-Sequential Design.
In this webinar you will learn about:
The pros and cons of adaptive designs
Sample Size Re-Estimation
Group-Sequential Design
Conditional Power
Predictive Power
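Conditional power, one of the topics above, can be sketched numerically. The following is a minimal hand-rolled illustration on the Brownian-motion information scale, not nQuery's implementation; the interim z-value, information fraction, and drift are hypothetical inputs:

```python
import math
from statistics import NormalDist

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """P(final one-sided test rejects | interim data), treating the test
    statistic as Brownian motion with the given drift per unit information:
    B(t) = z_interim*sqrt(t), and B(1) - B(t) ~ N(drift*(1-t), 1-t)."""
    nd = NormalDist()
    b_t = z_interim * math.sqrt(info_frac)
    z_crit = nd.inv_cdf(1 - alpha)
    num = z_crit - b_t - drift * (1 - info_frac)
    return 1 - nd.cdf(num / math.sqrt(1 - info_frac))

# Halfway through (t = 0.5) with z = 1.5, using the "current trend"
# drift estimate z/sqrt(t) -- illustrative numbers only
cp = conditional_power(1.5, 0.5, 1.5 / math.sqrt(0.5))
print(round(cp, 3))  # roughly 0.59 under these assumed inputs
```

Sample size re-estimation methods typically ask whether this quantity is high enough to continue, or whether extra patients would push it back to an acceptable level.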
About the webinar
Flexible Clinical Trial Design | Survival, Stepped-Wedge & MAMS Designs
As clinical trials increase in complexity, trial designs must adapt to match.
From dealing with non-proportional hazards in survival analysis to creating seamless Phase II/III clinical trials, it is an exciting time to be involved in clinical trial design and analysis.
In this free webinar, we will explore a select few topics that highlight the additional flexibility available when designing modern clinical trials.
In this free webinar you will learn about:
Flexible Survival Analysis Designs
Non-proportional hazards and other complex survival curves have attracted increasing interest because they are commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods, and in alternative methods that better adapt to such deviations.
In this webinar, we will look at power analysis assuming complex survival curves and the weighted log-rank test as one candidate model to deal with a delayed survival effect.
Stepped-Wedge designs
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we will introduce stepped-wedge designs and provide an insight into the more complex, flexible randomization schedules available.
Multi-Arm Multi-Stage (MAMS)
MAMS designs make it possible to assess more treatments in less time than a series of two-arm trials, and can require a smaller total sample size than the equivalent number of two-arm trials.
In this webinar, we will look at the design of a Group Sequential MAMS design and explore its design requirements.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
For more webinars check out https://www.statsols.com/webinars
Sample size for survival analysis - a guide to planning successful clinical t... | nQuery
Determining the appropriate number of events needed for survival analysis is a complex task as study planners try to predict what sample size will be needed after accounting for the complications of unequal follow-up, drop-out and treatment crossover.
The statistical, logistical and ethical considerations that must be balanced in planning a survival analysis all complicate life for biostatisticians. This complexity, however, has created a need for new analyses and procedures to help the planning process for survival analysis trials.
The wider move from fixed to flexible designs has opened up opportunities for advanced methods, such as adaptive design and Bayesian analysis, to help deal with the unique complications of planning for survival data, but these methods have complications of their own that need to be explored.
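As a concrete illustration of the event-driven planning described above, here is a sketch of the widely used Schoenfeld approximation for the number of events a two-arm log-rank test needs; this is a textbook formula offered as an illustration, not nQuery's exact procedure:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.9, alloc=0.5):
    """Schoenfeld's approximation: events needed for a two-arm log-rank
    test to detect hazard ratio `hr` (two-sided alpha, allocation fraction
    `alloc` to one arm)."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    return math.ceil((za + zb) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2))

print(schoenfeld_events(0.7))  # 331 events for HR 0.7, 5% alpha, 90% power
```

Translating required events into required patients is exactly where accrual, follow-up, and dropout assumptions enter, which is why survival sample sizes are so sensitive to the planning issues listed above.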
2020 trends in biostatistics what you should know about study design - slid... | nQuery
2020 Trends In Biostatistics - What you should know about study design.
In this free webinar you will learn about:
-Adaptive designs in confirmatory trials
-Using external data in study planning
-Innovative designs in early-stage trials
To watch the full webinar:
https://www.statsols.com/webinar/2020-trends-in-biostatistics-what-you-should-know-about-study-design
Webinar slides how to reduce sample size ethically and responsibly | nQuery
[Webinar] How to reduce sample size...ethically and responsibly | In this free webinar, you will learn various design strategies to help reduce the sample size of your study in an ethical and responsible manner. Practical examples will be used throughout.
Non-inferiority and Equivalence Study design considerations and sample size | nQuery
About the webinar
This webinar examines the role of non-inferiority and equivalence in study design
In this free webinar, you will learn about:
-Regulatory information on this type of study design
-Considerations for study design and your sample size
-Practical worked examples of
--Non-inferiority Testing
--Equivalence Testing
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch the video at: https://www.statsols.com/webinars
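The non-inferiority example above comes down to a standard normal-approximation calculation. A minimal sketch for comparing two means (illustrative inputs, not the webinar's worked example):

```python
import math
from statistics import NormalDist

def ni_n_per_group(sigma, margin, true_diff=0.0, alpha=0.025, power=0.9):
    """Per-group n for a non-inferiority test of two means (normal
    approximation, one-sided alpha). Assumes higher scores are better and
    H0: new - reference <= -margin; true_diff is the assumed real difference."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha)
    zb = nd.inv_cdf(power)
    return math.ceil(2 * sigma ** 2 * (za + zb) ** 2 / (margin + true_diff) ** 2)

print(ni_n_per_group(sigma=10, margin=5))  # 85 per group at 90% power
```

Equivalence testing (two one-sided tests) follows the same logic but must rule out differences in both directions, which generally increases the required sample size.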
Designing studies with recurrent events | Model choices, pitfalls and group s... | nQuery
In this free webinar, we will examine the important design considerations for analyzing recurring events and counts.
Watch the webinar at: https://www.statsols.com/en/webinar/designing-studies-with-recurrent-events
Designing studies with recurrent events (Model choices, pitfalls and group sequential design)
Webinar slides - alternatives to the p-value and power | nQuery
What are the alternatives to the p-value & power? What is the next step for sample size determination? We will explore these issues in this free webinar presented by nQuery
Innovative Sample Size Methods For Clinical Trials | nQuery
"Innovative Sample Size Methods for Clinical Trials" is hosted to coincide with the Spring 2018 update to nQuery - The leading Sample Size Software.
Hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - you'll learn about the benefits of a range of procedures and how you can implement them in your work:
1) Dose-escalation with the Bayesian Continual Reassessment Method
The Continual Reassessment Method (CRM) is a growing alternative to the 3+3 method for finding the Maximum Tolerated Dose (MTD) in Phase I trials.
See how researchers can overcome the drawbacks of 3+3 and easily find the required sample size for this beneficial alternative for finding the MTD.
2) Bayesian Assurance with Survival Example
This Bayesian alternative to power has experienced a rapid rise in interest and application from researchers.
See how Assurance is being used by researchers to discover the true “probability of success” of a trial.
3) Mendelian Randomization
Mendelian randomization (MR) is a method that allows testing of a causal effect from observational data in the presence of confounding factors.
However, in order to design efficient Mendelian randomization studies, it is essential to calculate the appropriate sample sizes required. We demonstrate what to do to achieve this.
4) Negative Binomial Distribution
The negative binomial model is increasingly used to model count data. One of the challenges of applying it in clinical trial design is sample size estimation.
We demonstrate how best to determine the appropriate sample size in the presence of challenges such as unequal follow-up or dispersion.
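One common normal-approximation approach to the negative binomial sample size problem works on the log rate ratio, inflating the Poisson variance by the dispersion parameter. A hedged sketch with made-up rates and follow-up (an illustration of the style of calculation, not nQuery's exact method):

```python
import math
from statistics import NormalDist

def nb_n_per_arm(rate0, rate_ratio, follow_up, dispersion,
                 alpha=0.05, power=0.8):
    """Per-arm n to detect a rate ratio under a negative binomial model
    (normal approximation on the log rate ratio; equal n and follow-up).
    Per-subject Var(log rate) ~ 1/(rate*T) + k, with k the dispersion."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    rate1 = rate0 * rate_ratio
    v = (1 / (rate0 * follow_up) + dispersion) + (1 / (rate1 * follow_up) + dispersion)
    return math.ceil((za + zb) ** 2 * v / math.log(rate_ratio) ** 2)

# 1 event/year control rate, 25% reduction, 2 years follow-up, dispersion 1
print(nb_n_per_arm(1.0, 0.75, 2.0, 1.0))  # 301 per arm under these assumptions
```

Note how the answer grows with the dispersion and shrinks with longer follow-up, which is exactly the unequal-follow-up and overdispersion trade-off mentioned above.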
Extending A Trial’s Design: Case Studies Of Dealing With Study Design Issues | nQuery
About the webinar
As trials increase in complexity and scope, there is a requirement for trial designs to reflect this.
From non-proportional hazards in survival analysis to cluster randomization, we examine how to handle the study design issues of complex trials.
In this free webinar, you will learn about:
Dealing with study design issues
Practical worked examples of
Non-proportional Hazards
Cluster Randomization
Three Armed Trials
Non-proportional Hazards
Non-proportional hazards and complex survival curves have attracted increasing interest because they are commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods, and in alternative methods that better adapt to such deviations.
In this webinar, we look at methods proposed for complex survival curves and the weighted log-rank test as a candidate model to deal with a delayed survival effect.
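A delayed survival effect of the kind discussed here is easy to visualize with a piecewise-exponential sketch (an illustrative model with made-up hazards, not the webinar's worked example): the treatment arm shares the control hazard until a delay `tau`, then drops to `hr` times it.

```python
import math

def surv_control(t, lam0):
    # Exponential survival for the control arm: S(t) = exp(-lam0 * t)
    return math.exp(-lam0 * t)

def surv_treated(t, lam0, hr, tau):
    # Same hazard as control before tau; hazard hr * lam0 afterwards
    if t <= tau:
        return math.exp(-lam0 * t)
    return math.exp(-lam0 * tau - hr * lam0 * (t - tau))

lam0 = math.log(2) / 12  # assumed control median survival of 12 months
print(surv_control(24, lam0))            # 0.25 at 24 months
print(surv_treated(24, lam0, 0.6, 3.0))  # higher: curves separate only after month 3
```

Because the curves coincide before `tau`, early events carry no signal, which is why weighted log-rank tests that down-weight early events can recover power here.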
Cluster Randomization
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we introduce cluster randomization and stepped-wedge designs to provide an insight into the requirements of more complex randomization schedules.
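The sample size cost of cluster randomization is usually quantified with the standard design effect, DE = 1 + (m - 1) * ICC. A minimal sketch with hypothetical inputs:

```python
import math

def cluster_adjusted_n(n_individual, cluster_size, icc):
    """Inflate an individually-randomized total sample size by the
    design effect DE = 1 + (m - 1) * ICC for clusters of size m."""
    design_effect = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * design_effect)

# e.g. 210 individually-randomized subjects, clusters of 20, ICC = 0.05
print(cluster_adjusted_n(210, 20, 0.05))  # 410: nearly double
```

Even a modest intracluster correlation can nearly double the required n, which is one reason more flexible schedules such as stepped-wedge designs are attractive.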
Three Armed Trials
Non-inferiority testing is a common hypothesis test in the development of generic medicines and medical devices. The most common design compares the proposed non-inferior treatment to the standard treatment alone, but this leaves it uncertain whether the treatment effect is the same as in previous studies. This “assay sensitivity” problem can be resolved with a three-arm trial that includes a placebo alongside the new and reference treatments for direct comparison.
In this webinar, we show a complete testing approach to this gold-standard design and how to find the appropriate allocation and sample size for such a study.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Optimizing Oncology Trial Design: FAQs & Common Issues | nQuery
Optimizing Oncology Trial Design - FAQs & Common Issues
In this free webinar you will learn about
Endpoints
Models
Covariates, stratification and censoring issues
Adaptive design - complications and opportunities
Sample size determination
& more
In this free webinar we will offer guidance on how to optimize your oncology trial design. Specifically, we will examine the frequently asked questions and common issues that arise.
https://www.statsols.com/webinar/optimizing-oncology-trial-design
Webinar slides sample size for survival analysis - a guide to planning succ... | nQuery
Determining the appropriate number of events needed for survival analysis is a complex task as study planners try to predict what sample size will be needed after accounting for the complications of unequal follow-up, drop-out and treatment crossover.
The statistical, logistical and ethical considerations that must be balanced in planning a survival analysis all complicate life for biostatisticians. This complexity, however, has created a need for new analyses and procedures to help the planning process for survival analysis trials.
The wider move from fixed to flexible designs has opened up opportunities for advanced methods, such as adaptive design and Bayesian analysis, to help deal with the unique complications of planning for survival data, but these methods have complications of their own that need to be explored.
5 essential steps for sample size determination in clinical trials | nQuery
In this free webinar hosted by nQuery Researcher & Statistician Eimear Keyes, we map out the 5 essential steps for sample size determination in clinical trials. At each step, Eimear will highlight the important function it plays and how to avoid the errors that will negatively impact your sample size determination and therefore your study.
Watch the Video: https://www.statsols.com/webinar/the-5-essential-steps-for-sample-size-determination
Minimizing Risk In Phase II and III Sample Size Calculation | nQuery
[ Watch Webinar: http://bit.ly/2thIgmi ]. In this free webinar, Head of Statistics at Statsols, Ronan Fitzpatrick, addresses the issues of reducing risk in Phase II/III sample size calculations. Topics covered will include:
Sample Size Determination For Different Trial Designs
Bayesian Sample Size Determination
Sample Size For Survival Analysis
& more
Innovative Strategies For Successful Trial Design - Webinar Slides | nQuery
Full webinar available here: https://www.statsols.com/webinar/innovative-strategies-for-successful-trial-design
[Webinar] Innovative Strategies For Successful Trial Design - In this free webinar, you will learn about:
- The challenges facing your trials
- How to calculate the correct sample size
- Worked examples including Mixed/Hierarchical Models
- Posterior Error
- Adaptive Designs For Survival
www.statsols.com
Practical Methods To Overcome Sample Size Challenges | nQuery
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues by focusing on one of the more complex areas for sample size determination: survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How to deal with expected dropouts or censoring. We compare the simple loss-to-follow-up adjustment with integrating a dropout process into the sample size model.
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how best to apply a sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for finding the effect size value. We examine these contrasting interpretations, determine the best method, and discuss how to deal with parameterization options.
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods to overcome common study design challenges and how these affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties such as:
Case-Control Analysis: We will examine how to deal with study constraints and with analyses performed during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive or impractical. We examine how sample size calculations are affected by cluster or Mendelian randomization.
Rare Events: How does an outcome being rare affect the types of study design and statistical methods chosen in your study?
An introduction to the stepped wedge cluster randomised trial | Karla Hemming
This set of slides introduces the SW-CRT within the context of several examples and specifically looks at some seemingly paradoxical results in the analysis of one SW-CRT.
An introduction to the stepped wedge cluster randomised trial, by Dr Karla Hemming for the CLAHRC West Midlands Scientific Advisory Group meeting, 9th June 2015, Birmingham, UK
Combination of informative biomarkers in small pilot studies and estimation ... | LEGATO project
Background:
Biomarker candidates are defined as measurable molecules found in biological media. According to the Biomarkers Definitions Working Group (2001), biomarkers cover a rather wide range of parameters. Biomarkers are now used widely in medical research, but single biomarkers may not possess the desired cause-effect association for disease classification and outcome prediction, so current efforts focus on combining biomarkers. With new technologies such as microarrays, next-generation sequencing and mass spectrometry, researchers can obtain biomarker candidates numbering in the tens of thousands. To avoid wasting money and time, it is advisable to strictly control the number of patients; however, pilot studies usually have low statistical power, which reduces the chance of detecting a true effect.
Sample size and how to calculate it
- Why sample size is important
- Alpha and beta errors
- Main outcome and Effect size
- Practical examples using means, proportions, correlations and confidence intervals
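For the means case in the outline above, the classic two-sample formula is n = 2*sigma^2*(z_{1-alpha/2} + z_{power})^2 / delta^2 per group. A quick sketch with illustrative numbers:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Two-sample comparison of means, equal groups, two-sided alpha:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2 per group."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    return math.ceil(2 * sigma ** 2 * (za + zb) ** 2 / delta ** 2)

print(n_per_group(delta=5, sigma=10))  # 63 per group to detect a 5-point difference (SD 10)
```

The alpha and beta errors from the outline enter as the two z-quantiles; halving the detectable effect size quadruples the required n.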
Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size | nQuery
Title: Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch Here: http://bit.ly/2ndRG4B
In this webinar you’ll learn about:
Benefits of Sensitivity Analysis: What does the researcher gain by conducting a sensitivity analysis?
Why isn't Sensitivity Analysis formalized: Why does sensitivity analysis still lack the formalized rules and grounding needed to make it a routine part of sample size determination in every field?
How Bayesian Assurance works: Bayesian Assurance provides key contextual information on what is likely to happen over the total range of possible values, rather than at the small number of fixed points used in a sensitivity analysis
Elicitation & SHELF: How expert opinion is elicited and then how to integrate these opinions with each other plus prior data using the Sheffield Elicitation Framework (SHELF)
Why use it in Frequentist or Bayesian analysis: How and why these methods can be used for studies whose final analysis will use Frequentist or Bayesian methods
Plus more
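The idea behind assurance — averaging power over a prior on the effect size rather than fixing one value — can be sketched with a small Monte Carlo. This is a toy two-sample-means setting with an assumed normal prior, not SHELF elicitation or nQuery's implementation:

```python
import math
import random
from statistics import NormalDist

def power_two_means(delta, sigma, n_per_arm, alpha=0.05):
    """Power of a two-sided two-sample z-test for a mean difference delta
    (upper rejection region only; the lower tail is negligible here)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    se = sigma * math.sqrt(2 / n_per_arm)
    return 1 - nd.cdf(z_crit - delta / se)

def assurance(prior_mean, prior_sd, sigma, n_per_arm, n_sims=20_000, seed=1):
    """Assurance = expected power, averaging over a normal prior on delta."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_sims):
        delta = random.gauss(prior_mean, prior_sd)
        total += power_two_means(delta, sigma, n_per_arm)
    return total / n_sims

# Power is ~80% at the prior mean, but assurance is lower once
# uncertainty about the effect size is averaged in
print(power_two_means(5, 10, 63))
print(assurance(5, 3, 10, 63))
```

This gap between conventional power and assurance is precisely the contextual information a fixed-point sensitivity analysis cannot provide.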
Innovative Sample Size Methods For Clinical Trials nQuery
"Innovative Sample Size Methods for Clinical Trials" is hosted to coincide with the Spring 2018 update to nQuery - The leading Sample Size Software.
Hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - you'll learn about the benefits of a range of procedures and how you can implement them into your work:
1) Dose-escalation with the Bayesian Continual Reassessment Method
CRM is a growing alternative to the 3+3 method for Phase I trials finding the Maximum Tolerated Dose (MTD).
See how researchers can overcome 3+3 drawbacks to easily find the required sample size for this beneficial alternative for finding the MTD.
2) Bayesian Assurance with Survival Example
This Bayesian alternative to power has experienced a rapid rise in interest and application from researchers.
See how Assurance is being used by researchers to discover the true “probability of success” of a trial.
3) Mendelian Randomization
Mendelian randomization (MR) is a method that allows testing of a causal effect from observational data in the presence of confounding factors.
However, in order to design efficient Mendelian randomization studies, it is essential to calculate the appropriate sample sizes required. We demonstrate what to do to achieve this.
4) Negative Binomial Distribution
Negative binomial model has been increasingly used to model the count data. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation.
We demonstrate how best to determine the appropriate sample size in the presence of challenges such as unequal follow-up or dispersion.
Extending A Trial’s Design Case Studies Of Dealing With Study Design IssuesnQuery
About the webinar
As trials increase in complexity and scope, there is a requirement for trial designs to reflect this.
From dealing with non-proportional hazards in survival analysis to dealing with cluster randomization, we examine how to deal with study design issues of complex trials.
In this free webinar, you will learn about:
Dealing with study design issues
Practical worked examples of
Non-proportional Hazards
Cluster Randomization
Three Armed Trials
Non-proportional Hazards
Non-proportional hazards and complex survival curves have become of increasing interest, due to being commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods and alternative methods that better adapt to deviations.
In this webinar, we look at methods proposed for complex survival curves and the weighted log-rank test as a candidate model to deal with a delayed survival effect.
Cluster Randomization
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we introduce cluster randomization and stepped-wedge designs to provide an insight into the requirements of more complex randomization schedules.
Three Armed Trials
Non-inferiority testing is a common hypothesis test in the development of generic medicine and medical devices. The most common design compares the proposed non-inferior treatment to the standard treatment alone but this leaves uncertain if the treatment effect is the same as from previous studies. This “assay sensitivity” problem can be resolved by using a three arm trial which includes placebo alongside the new and reference treatments for direct comparison.
In this webinar we show a complete testing approach to this gold standard design and how to find the appropriate allocation and sample size for this study.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Optimizing Oncology Trial Design FAQs & Common IssuesnQuery
Optimizing Oncology Trial Design - FAQs & Common Issues
In this free webinar you will learn about
Endpoints
Models
Covariates, stratification and censoring issues
Adaptive design - complications and opportunities
Sample size determination
& more
In this free webinar we will offer guidance on how to optimize your oncology trial design. Specifically, we will examine the frequently asked questions and common issues that arise.
https://www.statsols.com/webinar/optimizing-oncology-trial-design
Webinar slides sample size for survival analysis - a guide to planning succ...nQuery
Determining the appropriate number of events needed for survival analysis is a complex task as study planners try to predict what sample size will be needed after accounting for the complications of unequal follow-up, drop-out and treatment crossover.
The statistical, logistical and ethical considerations all complicate life for biostatisticians as issues to balance in planning a survival analysis. However, this complexity has created a need for new analyses and procedures to help the planning process for survival analysis trials.
The wider move from fixed to flexible designs has opened up opportunities for advanced methods such as adaptive design and Bayesian analysis to help deal with the unique complications of planning for survival data but these methods have their own complications that need to be explored too.
5 essential steps for sample size determination in clinical trials slidesharenQuery
In this free webinar hosted by nQuery Researcher & Statistician Eimear Keyes, we map out the 5 essential steps for sample size determination in clinical trials. At each step, Eimear will highlight the important function it plays and how to avoid the errors that will negatively impact your sample size determination and therefore your study.
Watch the Video: https://www.statsols.com/webinar/the-5-essential-steps-for-sample-size-determination
Minimizing Risk In Phase II and III Sample Size CalculationnQuery
[ Watch Webinar: http://bit.ly/2thIgmi ]. In this free webinar, Head of Statistics at Statsols, Ronan Fitzpatrick, addresses the issues of reducing risk in Phase II/III sample size calculations. Topics covered will include:
Sample Size Determination For Different Trial Designs
Bayesian Sample Size Determination
Sample Size For Survival Analysis
& more
Innovative Strategies For Successful Trial Design - Webinar SlidesnQuery
Full webinar available here: https://www.statsols.com/webinar/innovative-strategies-for-successful-trial-design
[Webinar] Innovative Strategies For Successful Trial Design- In this free webinar, you will learn about:
- The challenges facing your trials
- How to calculate the correct sample size
- Worked examples including Mixed/Hierarchical Models
- Posterior Error
- Adaptive Designs For Survival
www.statsols.com
Practical Methods To Overcome Sample Size ChallengesnQuery
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues by focusing in on one of the more complex areas for sample size determination; Survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How to deal with expected dropouts or censoring. We compare the simple loss-to-follow-up method and integrating a dropout process into the sample size model?
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how best to apply a sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for how to find the effect size value. We examine those contrasting interpretations and determine the best method and also how to deal with parameterization options.
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods to overcome common study design challenges and how these affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties such as:
Case-Control Analysis: How to deal with study constraints and with analyses performed during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive or impractical. We examine how sample size calculations are affected by cluster or Mendelian randomization.
Rare Events: How does a rare outcome affect the types of study design and statistical methods chosen in your study?
An introduction to the stepped wedge cluster randomised trial, by Karla Hemming
This set of slides introduces the SW-CRT within the context of several examples and specifically looks at some seemingly paradoxical results in the analysis of one SW-CRT.
An introduction to the stepped wedge cluster randomised trial, by Dr Karla Hemming for the CLAHRC West Midlands Scientific Advisory Group meeting, 9th June 2015, Birmingham, UK
Combination of informative biomarkers in small pilot studies and estimation ... (LEGATO project)
Background:
Biomarker candidates are defined as measurable molecules found in biological media. According to the Biomarkers Definitions Working Group (2001), biomarkers cover a rather wide range of parameters. Biomarkers are now used widely in medical research, where single biomarkers may not possess the desired cause-effect association for disease classification and outcome prediction; current efforts therefore focus on combining biomarkers. With new technologies such as microarrays, next-generation sequencing and mass spectrometry, researchers can obtain biomarker candidates numbering in the tens of thousands. To avoid wasting money and time, it is suggested to control the number of patients strictly. However, pilot studies usually have low statistical power, which reduces the chance of detecting a true effect.
Sample size and how to calculate it
- Why sample size is important
- Alpha and beta errors
- Main outcome and Effect size
- Practical examples using Means-Proportions-Correlation- Confidence Interval
Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size, by nQuery
Title: Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch Here: http://bit.ly/2ndRG4B
In this webinar you’ll learn about:
Benefits of Sensitivity Analysis: What does the researcher gain by conducting a sensitivity analysis?
Why isn't Sensitivity Analysis formalized: Why does sensitivity analysis still lack the type of formalized rules and grounding that would make it a routine part of sample size determination in every field?
How Bayesian Assurance works: Bayesian Assurance provides key contextual information on what is likely to happen over the total range of possible values, rather than the small number of fixed points used in a sensitivity analysis
Elicitation & SHELF: How expert opinion is elicited, and how to integrate these opinions with each other and with prior data using the Sheffield Elicitation Framework (SHELF)
Why use in both Frequentist and Bayesian analysis: How and why these methods can be used for studies whose final analysis will use Frequentist or Bayesian methods
Plus more
Bayesian Approaches To Improve Sample Size Webinar, by nQuery
Title: Bayesian Approaches To Improve Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
In this webinar you'll learn about:
Bayesian Sample Size Determination: See how the growth of Bayesian analysis has helped transform our ideas about statistical inference and methodologies in clinical trials
Bayesian Assurance: Get an informative answer on how likely it is to see a “positive” outcome from the trial and then make better decisions on what trials to back
Posterior Credible Intervals and Mixed Bayesian Likelihood: Enable researchers to use prior information from pilot studies and other sources to make quicker and better decisions
Plus much more
Cluster randomised trials with excessive cluster sizes: ethical and design im..., by Karla Hemming
Investigators submitting funding applications strive for nominal levels of power to ensure their applications are competitive. If the number of clusters is limited this might mean large clusters are needed to achieve that power; but a slightly lower power might be achievable with a drastic reduction in cluster sizes. Alternatively, increasing the number of clusters minimally might mean the desired level of power is achievable, again with a drastic reduction in cluster sizes.
The use of adaptive designs is becoming quite popular and well perceived by regulatory agencies such as the FDA in the US. "Adaptation" can occur in different ways and can potentially make studies more efficient (e.g. shorter duration, fewer patients), more likely to demonstrate an effect of the drug if one exists, or more informative (see the "Adaptive Design Clinical Trials for Drugs and Biologics" FDA guidance).
The aim of this presentation is to illustrate a case where an adaptive design was used in a Phase III oncology pivotal study with Overall Survival as the primary endpoint. The particular adaptation implemented was an unblinded sample size re-estimation (SSR) that applied a promising zone approach.
The main focus will be how the adaptive design impacted the SDTM modelling, the design of some ADaM datasets (e.g. those containing the time-to-event endpoints and therefore using ADTTE ADaM model) and later on how some mapping and analysis decisions were described in both the study and analysis reviewer guide.
Clinical Research Statistics for Non-Statisticians, by Brook White, PMP
Through real-world examples, this presentation teaches strategies for choosing appropriate outcome measures, methods for analysis and randomization, and sample sizes as well as tips for collecting the right data to answer your scientific questions.
Sample Size: A couple more hints to handle it right using SAS and R, by Dave Vanz
Andrii Artemchuk from Intego Group, a Ukrainian offshore staffing company, presented this slide deck on SAS and R at the PhUSE conference in Frankfurt, Germany in 2018.
7. Context
SSD finds the appropriate sample size for your study
Common metrics are statistical power, interval width or cost
SSD seeks to balance ethical and practical issues
Crucial to arrive at valid conclusions and avoid Type M/S errors
High cost of failed clinical trials in drug development
8. Adaptive Trials Overview
Adaptive Trials are any trial where a change or decision is made to a trial while it is still ongoing
Encompasses a wide variety of potential adaptations
E.g. early stopping, SSR, enrichment, seamless, dose-finding
Adaptive trials seek to give control to trialists to improve the trial based on all available information
10. Adaptive Trials Regulatory Background
Draft FDA CBER/CDER Guidance published in 2010
"Well-understood" and "Less well-understood" designs
EMA published a similar reflection paper (2007)
Increased interest in encouraging adaptive design
US: 21st Century Cures Act; EU: Adaptive Pathways
New FDA Guidance expected later this year
Will likely see a proliferation of new designs
11. Sample Size Re-estimation (SSR)
Will focus here on the specific adaptive design of SSR
Adaptive trial focused on a higher sample size if needed
Strong adaptation target due to intrinsic SSD uncertainty
Note that it is more suited to knowable/short follow-up
Note that it could lower the sample size, but this is not encouraged
13. Blinded Sample Size Re-estimation
BSSR uses an interim blinded nuisance parameter estimate
Use of blinded data reduces logistical/regulatory issues
Considered a "well understood" type of adaptive design
Multiple methods, but focus here on the internal pilot approach
Update N based on the parameter estimate from the internal pilot
Use the same methods as a fixed-term trial, incl. the pilot
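The internal pilot idea above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not nQuery's implementation: the function names are hypothetical, and the "lumped" one-sample variance of the blinded interim data is used as one simple blinded estimator of σ² (it slightly overestimates σ² when a treatment effect exists, which is partly why BSSR is viewed as a conservative, "well understood" adaptation).

```python
import math
from statistics import NormalDist, variance

def n_per_group(sd, delta, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sample mean test."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

def blinded_ssr(blinded_obs, delta, alpha=0.05, power=0.90):
    """Re-estimate N per group from the pooled (blinded) interim variance.

    Treats all interim observations as one sample, ignoring treatment
    labels, then plugs the resulting SD back into the original formula.
    """
    sd_hat = math.sqrt(variance(blinded_obs))  # blinded estimate of sigma
    return n_per_group(sd_hat, delta, alpha, power)
```

If the blinded interim SD comes out larger than planned, the re-estimated N rises accordingly; otherwise the original N is typically retained (lowering N is possible but, as noted above, not encouraged).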
14. Blinded SSR nQuery Summary (Autumn 2018)
Blinded SSR Means
SSR Criteria: Variance
Three σ² estimate methods:
1. Two Sample Inequality
2. Two Sample NI
3. Two Sample Equiv
Blinded SSR Props
SSR Criteria: Overall Success Rate
Assumes effect size true
1. Two Sample Inequality
2. Two Sample NI
15. Two Sample Mean Blinded SSR Example
“We estimated that we would need to enrol 160 patients, given an expected mean (±SD) annual decline in the FVC of 9±16 percent of the predicted value and a dropout rate of 15 percent, to achieve a two-sided alpha level of 0.05 and a statistical power of 90 percent.”
Source: nejm.com
Significance Level (2-Sided): 0.05
Mean Difference (%): -9
Standard Deviation (%): 16
Dropout Rate: 15%
Target Power: 90%
Nuisance Parameter: Standard Deviation
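The planning calculation in the example above can be reproduced approximately. This is a hedged sketch using the standard normal-approximation formula with a dropout inflation; the function name is hypothetical, and the published figure of 160 plausibly reflects a slightly different method (e.g. a t-test-based calculation or rounding up per group), so the sketch lands close to, not exactly on, the paper's number.

```python
import math
from statistics import NormalDist

def n_total(sd, delta, alpha=0.05, power=0.90, dropout=0.0):
    """Total N for a two-sample mean comparison (normal approximation),
    inflated for an expected dropout fraction."""
    z = NormalDist().inv_cdf
    per_group = math.ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2)
    per_group = math.ceil(per_group / (1 - dropout))  # inflate for dropout
    return 2 * per_group

# Scleroderma lung disease example: delta = 9, SD = 16, 15% dropout
print(n_total(sd=16, delta=9, alpha=0.05, power=0.90, dropout=0.15))  # prints 158
```

Note how the dropout adjustment works: the 15% expected loss inflates 67 per group to 79 per group, i.e. dropout is handled by dividing the unadjusted N by (1 − dropout rate), the simple loss-to-follow-up approach.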
16. Two Sample Proportion BSSR Example
“For our sample size calculations, we assumed that 20% of women in the control arm would be using LARC after their 6-week postpartum visit, … An analysis population with at least 626 women (313 in each arm) was required to provide 80% power (using a two-sided alpha of 0.05) to detect an absolute 10% increase to 30% in LARC use in the intervention arm [14]. Anticipating a maximum drop-out rate of 20% at the time of Follow-Up Survey #2, we planned to randomize 800 participants”
Source: Contraception
Significance Level (2-Sided): 0.05
Control Rate: 0.2
Intervention Rate: 0.3
n per Group: 313
Target Power (%): 80%
Nuisance Parameter: Overall Success Rate
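For proportions, the blinded nuisance parameter is the overall success rate: assuming the planned effect size is true, the blinded pooled rate and the assumed difference together pin down the two per-arm rates, from which N can be recomputed. The sketch below, with hypothetical function names, uses the simple unpooled normal approximation; it gives 291 per group rather than the published 313, which presumably used a corrected or pooled-variance formula, so treat it as illustrating the mechanics only.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Unpooled normal-approximation sample size per group for two proportions."""
    z = NormalDist().inv_cdf
    num = (z(1 - alpha / 2) + z(power)) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

def bssr_proportions(overall_rate, assumed_diff, alpha=0.05, power=0.80):
    """Blinded SSR for proportions: reconstruct per-arm rates from the
    blinded overall success rate and the assumed (true) treatment
    difference, then recompute the required sample size."""
    p1 = overall_rate - assumed_diff / 2  # implied control rate
    p2 = overall_rate + assumed_diff / 2  # implied intervention rate
    return n_per_group(p1, p2, alpha, power)

# Blinded interim overall rate of 0.25 with an assumed 0.10 difference
# implies rates of 0.2 vs 0.3, recovering the planning scenario above.
print(bssr_proportions(0.25, 0.10))
```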
18. Group Sequential Designs (GSD)
GSD facilitate interim analyses
Interim analyses occur while the trial is on-going
Accrued data analysed at pre-specified times
E.g. after 1/2 of subjects have been measured
Can stop for benefit or futility
If neither found, continue trial until end/next interim
Need to account for effect of multiple analyses
Do this by "spending" α and/or β errors
GSD Changes:
1. Futility Only Designs
2. Additional Outputs
3. New Two Sample TTE
4. One Sample Mean GSD
5. One Sample Prop GSD
19. Error Spending (Lan & DeMets)
Two criteria for early stopping:
1. Efficacy (α-spending)
2. Futility (β-spending)
Multiple error spending functions: O'Brien-Fleming, Pocock etc.
Both α and β spending work similarly
Can be very liberal or conservative
At each interim analysis, a proportion of the total error is spent
Makes the analysis at the endpoint more conservative
α(τ) = 2(1 − Φ(z_{α/2} / √τ))
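The O'Brien-Fleming-type spending function above is easy to evaluate directly. A minimal sketch (the function name is hypothetical): for information fraction τ it returns the cumulative α spent, which is tiny at early looks and reaches the full α at τ = 1, illustrating why O'Brien-Fleming boundaries are so conservative early on.

```python
import math
from statistics import NormalDist

def obf_spend(tau, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type error spending:
    alpha(tau) = 2 * (1 - Phi(z_{alpha/2} / sqrt(tau)))."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / math.sqrt(tau)))

# Cumulative alpha spent at a few information fractions
for tau in (0.25, 0.5, 0.75, 1.0):
    print(f"alpha spent by tau={tau}: {obf_spend(tau):.5f}")
```

At τ = 0.5 (halfway through the trial) only about 0.0056 of the 0.05 total has been spent, so the interim boundary is very strict and most of the α is preserved for the final analysis.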
20. Group Sequential Example
“A sample size of 242 subjects (121 per treatment group) provides at least 80% power to detect a relative difference of 53% between botulinum toxin A and standardized anticholinergic therapy, assuming a treatment difference of -0.80 and a common SD of 2.1 (effect size = 0.381), and a two-sided type I error rate of 5%. Sample size has been adjusted to allow for a 10% loss to follow-up over the 6 months of treatment as well …”
Source: NEJM (2012)
Significance Level (2-sided): 0.05
OnabotulinumtoxinA Mean: -2.3
Anticholinergic Mean: -1.5
Standard Deviation (Both): 2.1
Power: 80%
# Interim Analyses: 1
α Spending Function: O'Brien-Fleming
21. Conditional Power (CP)
CP gives the probability of rejecting the null given the interim test statistic
Calculation still depends on what the "true" difference is set to
Often used as an ad-hoc criterion for futility testing in GSD
More flexible than β-spending, but with weaker error guarantees
Focus here on CP as a measure of "promising" results
"Promising" meaning less than target power, but close to it
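Conditional power has a closed form in the standard B-value formulation (as in Jennison & Turnbull, cited in the references). The sketch below is one common parameterization under stated assumptions, with a hypothetical function name: given the interim z-statistic at information fraction t, it computes the probability of crossing the final two-sided boundary, defaulting to the "current trend" assumption that the observed drift continues.

```python
import math
from statistics import NormalDist

def conditional_power(z_t, t, theta=None, alpha=0.05):
    """Conditional power of rejecting H0 at the final analysis, given
    interim z-statistic z_t at information fraction t (0 < t < 1).

    theta is the assumed drift (expected final z under the assumed effect);
    by default the 'current trend' estimate z_t / sqrt(t) is used.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    if theta is None:
        theta = z_t / math.sqrt(t)       # current-trend assumption
    b_t = z_t * math.sqrt(t)             # B-value at information time t
    num = z_crit - b_t - theta * (1 - t) # remaining distance to the boundary
    return 1 - nd.cdf(num / math.sqrt(1 - t))

# Interim z = 2.0 halfway through: high CP under the current trend,
# much lower if the true effect is assumed to be null (theta = 0).
print(conditional_power(2.0, 0.5))
print(conditional_power(2.0, 0.5, theta=0.0))
```

The gap between the two printed values illustrates the slide's caveat: CP is only as meaningful as the "true" difference it is conditioned on.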
22. Conditional Power & Unblinded SSR
Most common criterion proposed for unblinded SSR is CP
SSR suggested when interim results are "promising" (Chen et al)
Gives a third option vs GSD: continue, stop early, increase N
"Promising" is user-defined, but based on the unblinded effect size
Power for an optimistic effect, but increase N for lower relevant effects?
2 methods here: Chen, DeMets & Lan; Cui, Hung & Wang
1st uses GSD statistics, but only at the penultimate look & for high CP
2nd uses a weighted test statistic to preserve the Type I error
25. Discussion and Conclusions
Adaptive Trials expected to become more common
Reduction of costs, greater regulatory interest etc.
SSR will be one common type of adaptive trial
Blinded SSR already widely accepted; unblinded use growing
Blinded SSR targets initial variance under-estimates
26. nQuery Spring 2018 Update
Initial release focused on Survival & Bayesian tables.
April release adds 72 new tables in the following areas:
New tables in April update (52): Epidemiology, Non-inferiority/Equivalence, Correlation/ROC
New Bayes tables in April update (20): Bayesian Sample Size
27. nQuery Autumn 2018 Update
Autumn 2018 release adds the nQuery Adapt module, 32 new tables & undo/redo:
New Core Tables (20): Proportions + Crossover
nQuery Bayes Tables (12): Assurance
nQuery Adapt Tables (15): Conditional Power, GST + SSR
29. References
Friede, T., & Kieser, M. (2006). Sample size recalculation in internal pilot study designs: a review. Biometrical Journal, 48(4), 537-555.
Tang, J. H., Dominik, R. C., Zerden, M. L., Verbiest, S. B., Brody, S. C., & Stuart, G. S. (2014). Effect of an educational script on postpartum contraceptive use: a randomized controlled trial. Contraception, 90(2), 162-167.
Tashkin, D. P., Elashoff, R., Clements, P. J., Goldin, J., Roth, M. D., Furst, D. E., ... & Seibold, J. R. (2006). Cyclophosphamide versus placebo in scleroderma lung disease. New England Journal of Medicine, 354(25), 2655-2666.
Jennison, C., & Turnbull, B. W. (1999). Group sequential methods with applications to clinical trials. CRC Press.
Visco, A. G., et al. (2012). Anticholinergic therapy vs. onabotulinumtoxinA for urgency urinary incontinence. New England Journal of Medicine, 367(19), 1803-1813.
Chen, Y. J., DeMets, D. L., & Gordon Lan, K. K. (2004). Increasing the sample size when the unblinded interim result is promising. Statistics in Medicine, 23(7), 1023-1038.
Cui, L., Hung, H. J., & Wang, S. J. (1999). Modification of sample size in group sequential clinical trials. Biometrics, 55(3), 853-857.
Editor's Notes
Point 1:
http://rsos.royalsocietypublishing.org/content/1/3/140216 -> Screening problem analogy.
Type S Error = Sign Error, i.e. the sign of the estimate differs from the actual population value
Type M Error = Magnitude Error, i.e. the estimate is an order of magnitude different from the actual value
Point 2:
Know we have only 100 subjects available. Need to know what power this will give us, i.e. is there enough power to justify even doing the study.
Phase III clinical trials constitute 90% of trial costs, so it is vital to reduce waste and ensure trials can fulfil their goals.
Point 3:
Sample size requirements are described in ICH Efficacy Guideline E9: Statistical Principles for Clinical Trials
See FDA/NIH draft protocol template here: http://osp.od.nih.gov/sites/default/files/Protocol_Template_05Feb2016_508.pdf (Section 10.5)
Nature Statistical Checklist: http://www.nature.com/nature/authors/gta/Statistical_checklist.doc
Point 4:
In Cohen's (1962) seminal power analysis of the Journal of Abnormal and Social Psychology he concluded that over half of the published studies were insufficiently powered to reach statistical significance for the main hypothesis. Many journals (e.g. Nature) now require that authors submit power estimates for their studies.
Power/sample size is one of the areas highlighted when discussing the "crisis of reproducibility" (Ioannidis). It is a relatively easy fix compared to detecting p-hacking etc.
Group sequential trials differ from fixed-period trials in that the data from the trial are analysed at one or more stages prior to the conclusion of the trial.
As a result, the alpha value applied at each analysis or 'look' must be adjusted to preserve the overall Type I error.
So, in effect, you are 'spending' some of your alpha at each 'look'. The alpha values used at each look are calculated based upon the spending function chosen, the number of looks to be taken during the course of the trial, and the overall Type I error rate.
Multiple looks = multiple chances to find significance, which must be adjusted for. Note that beta-spending actually decreases the chance of finding significance (since a futility stop removes future alpha tests) and thus actually inflates the critical p-value. See the futility-only example.
Actual mean values taken from elsewhere in paper.