About the webinar
This webinar examines the role of non-inferiority and equivalence in study design.
In this free webinar, you will learn about:
- Regulatory information on this type of study design
- Considerations for study design and your sample size
- Practical worked examples of:
  - Non-inferiority testing
  - Equivalence testing
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch the video at: https://www.statsols.com/webinars
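To make the webinar's topic concrete, here is a minimal sketch of a non-inferiority test for two proportions. The data, margin, and function name are hypothetical illustrations, not taken from the webinar itself:

```python
import math

def noninferiority_z(p_t, p_c, n_t, n_c, margin):
    """One-sided Z test of H0: p_t - p_c <= -margin (new treatment worse
    than control by more than the margin) for two independent proportions.
    Conclude non-inferiority when Z exceeds the one-sided critical value,
    e.g. 1.645 for alpha = 0.05."""
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return (diff + margin) / se

# Hypothetical data: 85% vs 86% response, 200 per arm, 10% margin
z = noninferiority_z(0.85, 0.86, 200, 200, margin=0.10)
print(round(z, 2))  # about 2.56, above 1.645, so non-inferiority is concluded
```

Note the asymmetry with equivalence testing: non-inferiority uses a single one-sided test, while equivalence requires two one-sided tests (TOST) against both margins.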
Superiority, Equivalence, and Non-Inferiority Trial Designs - Kevin Clauson
http://bit.ly/bQKcGz This lecture was presented as part of the Drug Literature Evaluation course at Nova Southeastern University. Guided notes and an audience response system were used to augment the lecture. Context for my decision to share these slides can be found at the provided link.
Population pharmacokinetics is the study of the sources and correlates of variability in drug concentrations among individuals who are the target patient population receiving clinically relevant doses of a drug of interest
Pharmacoeconomics evaluates the costs and outcomes of pharmaceutical products and programs. It aims to improve resource allocation and healthcare spending. The document outlines key pharmacoeconomics concepts like perspectives, methodologies, and applications. Cost-benefit analysis compares monetary costs and benefits. Cost-effectiveness analysis expresses outcomes in natural units to compare cost per outcome. Pharmacoeconomics informs decisions on drug development, reimbursement, and policy to optimize value from limited healthcare resources.
This document provides an introduction to pharmacoepidemiology. It defines pharmacovigilance and pharmacoepidemiology, and discusses key epidemiological principles. It describes how epidemiological methods are applied in pharmacovigilance, including signal detection, creating a pharmacovigilance plan with a safety specification and PV plan, and using technical solutions like AERS and Q-Scan for pharmacoepidemiology.
Bioavailability and Bioequivalence Studies (BABE) & Concept of Biowaivers - Jaspreet Guraya
The presentation gives an insight into BABE studies, the mathematical and statistical procedures involved in designing them, and the official guidelines on study design. The later part also discusses biowaivers and their role.
This document discusses the various phases of clinical trials for drug development. It begins with preclinical testing on animals. Next is the investigational new drug application (IND) to obtain approval for human testing. Clinical trials then proceed through four phases to evaluate a drug's safety, efficacy, and dosing in humans. Phase 1 involves initial safety and dosing tests on healthy volunteers. Phase 2 expands to more subjects to further assess safety and efficacy. Phase 3 involves large-scale trials to confirm effectiveness. After approval, Phase 4 involves post-marketing surveillance. The overall goal is to generate sufficient data to submit a new drug application (NDA) and gain regulatory approval to market a new pharmaceutical drug.
Here are the designs I would recommend for each case:
Case 1: N-of-1 design. This design is well-suited for testing the efficacy of a treatment for an individual patient, as in this case assessing L-arginine for a carrier of OTCD.
Case 2: Randomized withdrawal design. This minimizes time on placebo by giving all patients open-label treatment initially to identify responders, who are then randomized to continue treatment or placebo. This is appropriate given the reversible but relatively slow outcome.
Case 3: Delayed start design. This can distinguish treatment effects on symptoms from effects on disease progression, which is important given the primary endpoint of changes on the UPDRS scale for Parkinson
Bioavailability and Bioequivalence Studies - Pranav Sopory
BA and BE studies.
Seminar presented at the All India Institute of Medical Sciences (AIIMS, New Delhi).
Focus on pharmacokinetic parameters (Cmax, AUC).
Single dose PK study, steady-state PK study, modified drug release PK study, in vivo methods, in vitro methods, pharmacodynamic study, comparative clinical trials. Biowaivers and biosimilars.
Reference: CDSCO guideline, USFDA guideline, ICH guidelines
Clinical trial phases: Phase 0 to 4: An Overview - Archana Gawade
Clinical drug trials are conducted in phases:
1. Phase 0 involves micro-dosing to obtain preliminary pharmacokinetic data with minimal risk.
2. Phase 1 studies assess safety and side effects in healthy volunteers to determine safe dosage levels.
3. Phase 2 studies evaluate effectiveness and further monitor safety, typically involving 20-300 patient volunteers.
4. Phase 3 trials test efficacy versus current treatments and closely monitor safety in 1,000-5,000 patients.
Various measures for the measurement of outcome, such as incidence, prevalence, and other drug use measures, are briefly discussed here with suitable examples and equations.
The document discusses several key points about determining appropriate drug doses and dosing intervals:
1) The starting dose and dosing interval aim to achieve a desirable therapeutic drug level in the body, based on pharmacokinetic parameters from the literature.
2) For some drugs without full information, assumptions must be made based on available data.
3) The steady-state average blood concentration equation can be used to calculate multiple dose regimens to maintain levels in the therapeutic range.
4) Both dose and interval should be considered, as changing one affects peak and trough concentrations.
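The steady-state average concentration relationship described above (Css,avg = F x Dose / (CL x tau)) can be rearranged to compute a dose. This is a minimal sketch with hypothetical numbers, not taken from the document:

```python
def maintenance_dose(css_avg, cl, tau, f=1.0):
    """Dose per interval needed to hold a target steady-state average
    concentration: Css,avg = F * Dose / (CL * tau), rearranged for Dose.
    css_avg in mg/L, cl in L/h, tau in h, f = bioavailability fraction."""
    return css_avg * cl * tau / f

# Hypothetical: target 15 mg/L, CL = 2.5 L/h, dosed every 8 h, F = 0.8
dose = maintenance_dose(15, 2.5, 8, f=0.8)
print(dose)  # 375.0 mg every 8 hours
```

The same equation shows point 4 above: halving the interval while halving the dose keeps Css,avg unchanged but narrows the peak-trough swing.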
Therapeutic drug monitoring of organ transplantation drugs - Dr. Ramesh Bhandari
1) The document discusses the therapeutic drug monitoring of cyclosporine, an immunosuppressant commonly used following organ transplantation.
2) Cyclosporine has variable absorption and significant inter-patient variability requiring therapeutic drug monitoring to maintain trough concentrations between 100-400 mcg/L.
3) Factors like CYP3A inhibitors/inducers and foods can impact cyclosporine levels, requiring dosage adjustments to be made based on concentration monitoring.
Introduction to dosage regimen and Individualization of dosage regimen - KLE College of Pharmacy
Introduction of Dosage regimen, Approaches for design of dosage regimen, Individualization, Advantages, Dosage in neonates, Geriatrics, Renal and Hepatic impaired Patients.
This document discusses factors that contribute to variability in individual drug responses and the need to individualize drug dosing regimens. It outlines several key sources of variability, including age, body weight, gender, genetics, disease conditions, and drug interactions. For each factor, it provides examples of how that factor can influence the pharmacokinetics and pharmacodynamics of drugs and necessitate dosage adjustments tailored to the individual patient. The goal is to achieve effective therapy while avoiding toxicity by understanding and accounting for variability between patients.
The presentation gives you a bird's-eye view of the basics of PK-PD modeling, its applications, types, limitations, and the various software packages used for it.
This document discusses drugs that can induce birth defects and the challenges of epidemiological research on this topic. It notes that 3-4% of live births experience major birth defects, and 40-90% of women consume at least one drug during pregnancy. Various drug classes like antibiotics, anticoagulants, NSAIDs, alcohol, and high-dose vitamin A are mentioned as potential teratogens. Methodological issues addressed include the rarity of specific birth defects requiring large sample sizes, recall bias in studies, and the need for cohort and case-control study designs. Solutions discussed involve different types of cohort studies and reviewing case reports to better understand adverse drug effects and design further research.
This document discusses dosing considerations for drugs in patients with renal impairment (uremia). Key points include:
- Chronic kidney disease affects over 50 million people worldwide and proper dosing of drugs is important in uremic patients.
- The kidneys play a key role in drug excretion, fluid/electrolyte balance, and waste removal. Impaired kidney function can impact drug dosing.
- Several methods exist for estimating appropriate dosing in uremic patients based on factors like the fraction of the drug excreted unchanged in urine and creatinine clearance as a marker of kidney function. Dose adjustment may involve decreasing the maintenance dose, increasing the dosing interval, or both.
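One such method, based on the fraction excreted unchanged and creatinine clearance as described above, is the Giusti-Hayton adjustment factor. The numbers below are hypothetical, and 120 mL/min is assumed as a typical normal creatinine clearance:

```python
def renal_adjustment_factor(fe, clcr_patient, clcr_normal=120):
    """Dose-adjustment factor Q = 1 - fe * (1 - CLcr/CLcr_normal)
    (Giusti-Hayton method), where fe is the fraction of drug excreted
    unchanged in urine and CLcr is creatinine clearance in mL/min."""
    return 1 - fe * (1 - clcr_patient / clcr_normal)

# Hypothetical: drug 70% renally excreted, patient CLcr = 30 mL/min
q = renal_adjustment_factor(0.7, 30)
print(round(q, 3))  # give this fraction of the normal dose,
                    # or lengthen the interval by 1/Q, or a mix of both
```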
This document discusses therapeutic drug monitoring (TDM) of drugs used to treat cardiovascular diseases, with a focus on digoxin. It provides details on the indications, pharmacokinetics, and appropriate use of TDM for digoxin including confirming toxicity, assessing factors that alter pharmacokinetics, addressing therapeutic failure, and ensuring medication compliance. The document also discusses dose adjustment and interpreting digoxin concentrations in the context of the clinical situation.
This document summarizes the key components of a clinical trial protocol. It discusses the types of clinical trials, phases of clinical trials, and the typical sections included in a protocol such as the title, objectives, study design, study population criteria, safety and efficacy assessments, statistics, and quality control plans. Protocols provide a formal design and plan for how a clinical trial will be conducted, managed, and reported.
Clinical trial designs can be categorized in several ways:
1. Based on the method used to allocate participants such as randomized controlled trials, non-randomized controlled trials, parallel group designs, crossover designs, and withdrawal designs.
2. Based on awareness of participants and researchers, such as blinded, unblinded, and double-blinded trials.
3. Based on the magnitude of activity being tested, such as superiority, inferiority, equality, and dose-response relationships.
Common trial types include pilot studies, which test the experimental design on a small scale, and placebo-controlled trials, which compare an intervention to a placebo. Randomized controlled trials are considered the gold standard for assigning participants randomly to treatment or control groups.
The document discusses the key aspects and purpose of an investigator brochure (IB). The IB is prepared by the sponsor of a clinical trial to provide essential information about the investigational product to investigators. It contains a comprehensive summary of relevant non-clinical and clinical data, including information on pharmacology, toxicology, safety and efficacy from previous human trials. The goal is to inform investigators of risks and monitoring needs for the safe and proper conduct of the clinical trial. The IB is an important document that is reviewed annually and made available to investigators and ethics committees.
Introduction to Randomized Controlled Trial - Drsnehas2
This document provides an overview of randomized controlled trials (RCTs). It defines RCTs as planned experiments where individuals are randomly assigned to experimental and control groups to assess the effect of a preventive or therapeutic measure. RCTs are considered the gold standard for epidemiological studies as they can provide the strongest evidence of causation. The document outlines the key aspects of RCTs, including categories (preventive and therapeutic trials), steps (from developing hypotheses to analysis), ethical considerations, randomization techniques, blinding/masking, and uses. RCTs aim to control for confounding factors and minimize bias through random assignment and blinding.
Extending A Trial's Design: Case Studies Of Dealing With Study Design Issues - nQuery
This document discusses several case studies of dealing with complex study design issues in clinical trials, including non-proportional hazards, cluster randomization, and three-armed trials. The agenda outlines topics on non-proportional hazards modeling and sample size considerations, cluster randomized and stepped-wedge designs, and methods for analyzing data from three-armed trials that include experimental, reference, and placebo groups. Worked examples are provided to illustrate sample size calculations and statistical approaches for each of these complex trial design scenarios.
Practical Methods To Overcome Sample Size Challenges - nQuery
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues by focusing on one of the more complex areas of sample size determination: survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How to deal with expected dropouts or censoring. We compare the simple loss-to-follow-up adjustment with integrating a dropout process into the sample size model.
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how best to apply a sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for how to find the effect size value. We examine those contrasting interpretations and determine the best method and also how to deal with parameterization options.
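The simple loss-to-follow-up adjustment mentioned under Drop Out above can be sketched in a few lines. The numbers are hypothetical illustrations, not from the webinar:

```python
import math

def adjust_for_dropout(n_required, dropout_rate):
    """Simple loss-to-follow-up inflation: enrol enough subjects so that
    the expected number remaining equals the computed sample size."""
    return math.ceil(n_required / (1 - dropout_rate))

# Hypothetical: 200 evaluable subjects needed per arm, 15% expected dropout
print(adjust_for_dropout(200, 0.15))  # 236 enrolled per arm
```

The webinar's point is that this crude inflation treats dropouts as pure loss; modeling a dropout process inside the survival sample size calculation can be less conservative.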
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods to overcome common study design challenges and how they affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties such as:
Case-Control Analysis: We will examine how to deal with study constraints and with analyses performed during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive, or impractical. We examine how sample size calculations are affected by cluster or Mendelian randomization.
Rare Events: How does an outcome being rare affect the types of study design and statistical methods chosen in your study?
This document provides an overview of statistics used in meta-analysis. It discusses key concepts like odds ratios, relative risk, confidence intervals, heterogeneity, and fixed and random effects models. It also summarizes different types of meta-analyses including realist reviews, meta-narrative reviews, and network meta-analyses. Software for performing meta-analyses and potential pitfalls in systematic reviews are also briefly covered.
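The fixed-effect model mentioned above is typically inverse-variance weighting, which can be sketched as follows. The effect sizes and variances are hypothetical, not from the document:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling: each study is weighted by
    1/variance; returns the pooled effect and its standard error."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Hypothetical log odds ratios and variances from three trials
pooled, se = fixed_effect_pool([-0.4, -0.2, -0.3], [0.04, 0.09, 0.06])
print(round(pooled, 3), round(se, 3))
```

A random-effects model differs by adding a between-study variance component (tau-squared) to each study's variance before weighting, widening the pooled confidence interval under heterogeneity.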
Sample Size: A couple more hints to handle it right using SAS and R - Dave Vanz
Andrii Artemchuk from Intego Group, a Ukrainian offshore staffing company, presented this PowerPoint at a PhUSE conference in Frankfurt, Germany, in 2018 on SAS and R.
This document discusses inferential statistics and provides examples to illustrate key concepts. Inferential statistics involves drawing conclusions about populations from sample data using probability and statistical testing. Common situations where inferential statistics are used include comparing differences between two or more samples, estimating population parameters from samples, and assessing correlations. Key steps involve defining a null hypothesis, choosing an appropriate statistical test based on the type of variable (qualitative or quantitative) and sample size, calculating a test statistic, determining the probability, and interpreting results to either reject or fail to reject the null hypothesis. Examples are provided to demonstrate applying concepts like hypothesis testing, choosing between tests, and interpreting outcomes.
The document discusses sample size determination for clinical and epidemiological research. It explains that proper sample size is important for validity, accuracy, and reliability of research findings. Key factors to consider in sample size calculations include the study objective, details of the intervention, outcomes, covariates, research design, and study subjects. Precision analysis and power analysis are two common approaches, with power analysis being most suitable for studies aiming to detect an effect. The document provides formulas and examples for calculating sample sizes for comparative and descriptive studies with both continuous and dichotomous outcomes. It also discusses the concepts of type I and II errors and their relationship to statistical power.
This document provides information and examples on calculating sample size for clinical studies. It discusses key factors that affect sample size calculation, including minimum important difference, standard deviation, power, type I and II errors, study design, dropout rate, and compliance. It provides step-by-step worked examples of calculating sample size for various hypothetical clinical studies. The document emphasizes that sample size calculation is important to ensure studies are adequately powered and conclusions are valid.
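The standard formula behind the worked examples described above, for comparing two means, can be sketched as follows. The default z values assume two-sided alpha = 0.05 and 80% power; the example numbers are hypothetical:

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Per-group sample size for comparing two means:
    n = 2 * (z_alpha/2 + z_beta)^2 * sigma^2 / delta^2,
    where delta is the minimum important difference and sigma the SD."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical: detect a 5 mmHg blood pressure difference, SD 12 mmHg
print(n_per_group(delta=5, sigma=12))  # 91 per group before dropout inflation
```

The formula makes the document's point visible: halving the detectable difference quadruples the required sample size, while higher power or a smaller alpha increases it through the z terms.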
This document provides an overview of a presentation on study designs, fundamentals of interpretation, and research topics. It includes 10 learning objectives covering key concepts in clinical study design such as validity, bias, study hierarchies, and statistical measures. It also presents 10 self-assessment questions pertaining to examples of study designs including randomized controlled trials, meta-analyses, and institutional review boards. The introduction explains why pharmacists need knowledge of study design and interpretation for tasks involving drug information, evidence-based medicine, research, and educating others.
This document provides an overview of biostatistics. It defines biostatistics and discusses variables that can be studied, including discrete and continuous variables. It describes common software used for analysis and summarizes typical descriptive measures like mean, median, standard deviation, etc. The document outlines common types of comparisons between continuous and categorical variables, including t-tests, ANOVA, and chi-square tests. It also discusses concepts like alpha, beta, power, and cautions around hypothesis testing and interpreting statistical significance.
This document discusses bioavailability and bioequivalence studies. It provides details on key pharmacokinetic parameters like AUC, Cmax, and Tmax that are evaluated in bioequivalence studies to determine if a generic drug is equivalent to a brand name drug. The document outlines current bioequivalence requirements set by various regulatory agencies like FDA, Health Canada, and others. It also discusses study design considerations, statistical analysis methods, and validation of bioanalytical methods used to evaluate bioequivalence.
- The document discusses sample size considerations for biomarker discovery and validation studies, noting that at least 250 samples are needed even for testing a few biomarkers, and larger sample sizes of 500-1,000 are needed for testing more biomarkers or with lower disease prevalence.
- Simulations showed high risks of false positive results from random data when sample sizes were under 250, prevalence was below 12%, or more than 25 biomarkers were analyzed.
- Key factors influencing the likelihood of random positive results are the number of patients, prevalence of the disease, and number of biomarkers investigated. Larger patient cohorts, higher prevalence, and analyzing fewer biomarkers reduce the risks of false discoveries.
The document discusses concepts related to meta-analysis including covariates, subgroup analysis, and meta-regression. It provides examples and questions related to analyzing trials of vitamin D supplementation and mortality in institutionalized elderly people. Maximum follow-up is identified as a study-level covariate. Subgroup analyses should be specified a priori, and meta-regressions on patient-level covariates can be valid if the association is biologically plausible.
The document provides guidelines for improving statistical analysis and reporting in research manuscripts submitted for publication. It discusses key methodological concepts like measurement uncertainty, sampling uncertainty, p-values, confidence intervals, and assumptions of statistical tests. It provides special recommendations for different study designs including case reports, experiments, observational studies, and randomized trials. The document emphasizes presenting details of statistical methods, quantifying findings with precision measures, avoiding sole reliance on p-values, adjusting for confounding, and following reporting guidelines for randomized trials.
BA-BE Bio-availability and Bio-equivalencyDr. Jigar Vyas
This document discusses bioavailability and bioequivalence testing. It defines key terms like bioavailability, pharmaceutical equivalents, bioequivalence and provides details on important pharmacokinetic parameters used to assess bioequivalence like AUC, Cmax and Tmax. It describes the goals and requirements of bioequivalence studies according to regulatory agencies like FDA. It also summarizes study design considerations and statistical analysis methods used to determine bioequivalence between test and reference products.
This document discusses sample size estimation and determination. It begins by defining what a sample is and why sample size is important. It describes factors that affect sample size, such as desired level of accuracy and precision. Several methods for calculating sample size are presented, including formulas for cross-sectional, case-control, and comparative studies using both qualitative and quantitative variables. Considerations like power, effect size, and study design are discussed. Examples are provided to demonstrate how to use formulas and tables to estimate sample size for different study designs.
This presentation is aimed at presenting the issues associated with subgroup analyses in clinical trials: the different types of subgroup analyses and the statistical issues associated with the conduct of subgroup analyses.
Biostatistics_Unit_II_Research Methodology & Biostatistics_M. Pharm (Pharmace...RAHUL PAL
This document provides an overview of biostatistics topics including parametric and non-parametric statistical tests, sample size calculation, and factors influencing sample size. It discusses commonly used parametric tests like the t-test, ANOVA, correlation coefficient, and regression analysis. Non-parametric tests like the Wilcoxon rank-sum test are also covered. The importance of considering sample size, factors that can impact it, and how dropouts are handled are summarized as well.
The document provides guidance on improving the chances of getting a manuscript accepted for publication. It discusses key methodological concepts like uncertainty of measurement and sampling, statistical assumptions, multiplicity, and proper reporting of different study types including case reports, experiments, observational studies, and randomized trials. The key recommendations are to 1) clearly describe statistical methods and sample sizes, 2) present both data and interpretation of results considering clinical and statistical significance, and 3) properly adjust for confounding and comply with reporting standards for different study designs.
Similar to Non-inferiority and Equivalence Study design considerations and sample size (20)
7. Background
Non-inferiority & equivalence ask whether a new trt. is similar to an existing trt.
•Common in generics & medical devices
Non-inferiority: Not Inferior to Control
•Direct effect measure w/ a “good” direction
•Need an NI margin below which the trt. is “inferior”
Equivalence: Equivalent to Control
•Commonly an indirect effect measure w/ no “good” direction, e.g. bioequivalence
•CI must fall between the lower & upper limits
Source: C. Pater (2004)
9. Non-inferiority Testing
Non-inferiority testing is a hypothesis test that the treatment is no worse than the standard by a specified margin
Select the non-inferiority margin based on expertise & data
•FDA: fixed fraction (M2) of the active control effect (M1)
Very common for generics or medical devices; usually compares treatment vs control (e.g. RLD) w/o placebo
Most often used for a continuous outcome (parallel or cross-over) but available for proportions, survival, counts
10. Worked Example 1
“Calculation of the sample size was based on a margin of non-inferiority for in-segment late luminal loss of 0.16 mm. This value is equal to 35 percent of an assumed mean (±SD) late luminal loss of 0.46±0.45 mm in diabetic patients after the implantation of a sirolimus stent, as found in an analysis of a series of diabetic patients treated with sirolimus stents at participating centers in the 10 months that preceded the initiation of the study. Using a one-sided α level of 0.05, we estimated that 99 patients per group were needed to demonstrate noninferiority of the paclitaxel stent with a statistical power of 80 percent. Expecting that up to 20 percent of the patients would not return for follow-up coronary angiography, we included 250 patients in the study.” Source: A. Dibra et al. (2005)

Parameter                      Value
Significance Level (1-sided)   0.05
Expected Difference            0
Non-Inferiority Margin         -0.16
Standard Deviation             0.45
Power                          80%
Dropout Rate                   20%
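The table above can be roughly reproduced with the standard normal-approximation formula for a parallel-group non-inferiority test on means. This is a sketch, not the study's actual software: it gives about 98 per group and about 245 recruited, close to the 99 per group and 250 reported in the paper (small differences come from rounding conventions).

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(sd, margin, diff=0.0, alpha=0.05, power=0.80):
    """Per-group n for a parallel-group non-inferiority test on means,
    one-sided alpha, normal approximation."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha)
    z_b = z.inv_cdf(power)
    # distance between expected difference and the NI margin
    # (margin entered as a positive value; the table shows it as -0.16)
    delta = diff + margin
    return ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

n = ni_sample_size(sd=0.45, margin=0.16)   # ~98 per group
n_recruited = ceil(2 * n / (1 - 0.20))     # inflate total for 20% dropout
```

Inflating for dropout before rounding, rather than after, is one reason published totals (250 here) can differ slightly from a direct recalculation.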
11. Non-inferiority Discussion
The size of the NI margin can take into account considerations other than the standard trt. effect size
•Safety profile, secondary endpoints, easier administration
•But in general, a conservative NI margin is encouraged (FDA)
Strong assumption for the 2-trt. design that the standard trt. effect size is retained from its approval (assay sensitivity)
•May need to replicate previous study conditions very closely
•May need additional evidence/data for regulatory approval
Note the closely related “Superiority by Margin” hypothesis
12. Three Armed Trials
Have Experimental (E), Reference (R) & Placebo (P) groups
• Direct evaluation of assay sensitivity (“gold standard”)
• Concurrent placebo only allowable if it is ethical to do so
Need to test H1(a): E/R > P and then H1(b): E > NIM
Can simplify to a “ratio of differences” test: (E-P)/(R-P) > θ
Framework of a Wald-type test for retention of effect
Can use the same approach for means, proportions, survival, rates
Can also find the optimal allocation for a given alternative
13. Three Armed Trials
1 Means (Homoscedastic) Pigeot et al. (2003)
2 Means (Heteroscedastic) Hasler et al. (2008)
3 Proportions Kieser and Friede (2007)
4 Survival/Time-to-Event Mielke et al. (2009)
5 Counts/Rates (Poisson) Mielke and Munk (2009)
6 Counts/Rates (Negative Binomial) Mütze et al. (2016)
7 Non-Parametric Mütze et al. (2016)
14. Worked Example 2
“It was assumed that the placebo-adjusted effect for both treatment groups was 1.56% and that the placebo-adjusted effect for the oral rsCT tablets must be at least 0.5 times the placebo-adjusted effect for the ssCT nasal spray for the study to demonstrate the non-inferiority of the oral rsCT tablets to the ssCT nasal spray. Thus we wished to have 95% confidence that the oral tablets were not less than one-half as effective as nasal spray. Assuming an SD of 2.5%, power of 80%, and a two-sided 5% level of significance, it was determined that approximately 133 patients were required for each of the active treatment groups and 84 patients were needed for the placebo treatment group.”

Parameter                       Value
Significance Level (1-Sided)    0.025
Experimental Arm Mean           1.56
Reference Arm Mean              1.56
Placebo Arm Mean                0
Non-inferiority Ratio           0.5
Common Standard Deviation       2.5
Power                           80%
Allocation Proportion (E:R:P)   0.38:0.38:0.24
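These inputs can be checked with the normal-approximation formula for the ratio-of-differences test of Pigeot et al. (2003), writing the hypothesis as a linear contrast. This is a sketch under the slide's assumptions, not the study's original calculation; it lands on a total of about 350, i.e. roughly 133 per active arm and 84 on placebo as quoted.

```python
from math import ceil
from statistics import NormalDist

def three_arm_ni_total_n(mu_e, mu_r, mu_p, theta, sd, alloc,
                         alpha=0.025, power=0.80):
    """Total N for testing (E-P)/(R-P) > theta via the linear contrast
    muE - theta*muR - (1-theta)*muP (Wald-type test, normal approximation)."""
    z = NormalDist()
    z_sum = z.inv_cdf(1 - alpha) + z.inv_cdf(power)
    w_e, w_r, w_p = alloc
    delta = mu_e - theta * mu_r - (1 - theta) * mu_p   # contrast under H1
    # variance inflation from the three allocation fractions
    var_factor = 1 / w_e + theta**2 / w_r + (1 - theta)**2 / w_p
    return ceil((sd * z_sum) ** 2 * var_factor / delta**2)

N = three_arm_ni_total_n(1.56, 1.56, 0.0, 0.5, 2.5, (0.38, 0.38, 0.24))
```

Note the quoted “two-sided 5%” significance corresponds to the one-sided 0.025 in the table.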
16. Equivalence Testing
Test if the treatment is equivalent to Control
•Bioequivalence (Cmax, AUC) tests common
•But widely used for direct measures too
Method: “Two One-Sided Tests” (TOST)
•H0: ΔTrue < ΔL or ΔTrue > ΔU; H1: ΔL < ΔTrue < ΔU
•Test both null hypotheses at one-sided α
•NB: overall Type I error equals the one-sided α
But TOST ≈ Confidence Interval Method
•Interval confidence level = 1 - 2 × TOST α
•For example: TOST α = 0.05 gives a 90% interval
•Other approaches proposed but not widely used (Lindley, Berger, Westlake)
Source: lesslikely.com
Source: CMBJ, Impax Labs
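The TOST/confidence-interval duality can be seen in a toy numeric sketch (the difference, standard error, and limits below are made up for illustration; z-tests are used for simplicity):

```python
from statistics import NormalDist

# hypothetical inputs: observed difference, its standard error,
# and equivalence limits
diff, se = 0.05, 0.10
d_lower, d_upper = -0.30, 0.30
alpha = 0.05
z = NormalDist()

# two one-sided z-tests, each at level alpha
p_lo = 1 - z.cdf((diff - d_lower) / se)   # H0: delta <= d_lower
p_hi = z.cdf((diff - d_upper) / se)       # H0: delta >= d_upper
tost_p = max(p_lo, p_hi)                  # reject only if both reject

# the matching (1 - 2*alpha) = 90% confidence interval
z_crit = z.inv_cdf(1 - alpha)
ci = (diff - z_crit * se, diff + z_crit * se)

tost_rejects = tost_p < alpha
ci_inside = d_lower < ci[0] and ci[1] < d_upper
# the two decisions always agree for this symmetric z-test setup
```

Declaring equivalence when the 90% interval sits inside (ΔL, ΔU) is exactly the TOST decision at one-sided α = 0.05.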
17. Equivalence Issues
Definition of equivalence and which effect(s)/measure(s) to use?
•Average, Individual, Population Equivalence; which of AUC, Cmax, Tmax?
Cross-over trials common for bioequivalence, but other designs possible
•2x2 is the “classic” but replicate designs (2x3, 2x4) are common; Williams designs if 3/4 trt.
Bioequivalence bounds often come from the regulator, otherwise expertise
•Most common: 0.8–1.25 for GMR (AUC), but issues if NTID or HVD
Be aware of issues/requirements with highly variable and NTID drugs
•Different requirements from FDA/EMA/others: bounds from CV, replicate designs…
18. Worked Example 3
“The sample size for the study was determined with reference to the relevant, recent literature available on the pharmacokinetics of sildenafil, in particular the results of a study conducted after administration of two 25 mg capsules of Viagra film-coated tablets in a population of 12 male subjects. The highest coefficient of variance for the pharmacokinetic parameters Cmax and AUC was estimated to be 0.383 … Fixing the significance level α at 5% and the hypothesized test/reference mean ratio to 1, 50 subjects were considered sufficient to attain a power of 80% to correctly conclude the bioequivalence between the two formulations within the range 80.00%–125.00% for all parameters (Cmax and AUC).” Source: Radicioni M et al. (2016)

Parameter                  Value
Significance Level         0.05
Lower Equivalence Limit    0.8
Upper Equivalence Limit    1.25
Mean Ratio                 1
Coefficient of Variation   0.383
Power (%)                  80
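The table's inputs roughly reproduce the quoted number with the standard log-scale normal approximation for a 2x2 crossover TOST. This is a sketch, not the authors' calculation: it gives 48 subjects, slightly below the 50 quoted (exact-power software and safety cushions typically push the figure up a little).

```python
from math import ceil, log
from statistics import NormalDist

def be_crossover_total_n(cv, alpha=0.05, power=0.80,
                         lower=0.80, upper=1.25):
    """Total subjects for a 2x2 crossover bioequivalence study with
    true test/reference ratio 1: TOST on the log scale, normal
    approximation."""
    z = NormalDist()
    s2 = log(1 + cv**2)                    # within-subject log-scale variance
    z_a = z.inv_cdf(1 - alpha)
    z_b = z.inv_cdf(1 - (1 - power) / 2)   # beta/2 for the symmetric case
    n = 2 * s2 * (z_a + z_b) ** 2 / log(upper) ** 2
    n = ceil(n)
    return n + n % 2                       # round up to an even total

n = be_crossover_total_n(cv=0.383)
```

Converting CV to log-scale variance via ln(1 + CV²) is the usual step when the CV is reported on the original scale.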
19. Discussion and Conclusions
NI & equivalence test whether a new trt. is similar to the standard trt.
•Non-inferiority = “no worse than”; Equivalence = “equal to”
NI suits a direct monotonic effect where the std. trt. trial can be “redone”
•The NI margin requires careful consideration and a cost-benefit balance
Three-arm trials give a direct comparison to placebo
•Flexible framework available, but only if it is ethical to give placebo
Equivalence if the trt. is “equivalent” on an “indirect” effect
•Bioequivalence is the typical use-case (AUC, Cmax), but beware its issues
21. nQuery Summer 2020 Release
The Summer 2020 (v8.6) release adds 26 new tables to nQuery across multiple areas:
•MAMS
•MCP-MOD
•Phase II Group Sequential Tests for Proportions (Fleming’s Design)
•GST + SSR
•Cluster Randomized Stepped-Wedge Designs
•Survival/Time-to-Event Trials
•Confidence Intervals for Proportions
•Three Armed Trials Non-inferiority
24. The solution for optimizing clinical trials
Trial stages covered: Pre-clinical/Research, Early Phase, Confirmatory, Post-Marketing
Methods supported include: Animal Studies; ANOVA/ANCOVA; Cohort Studies; Case-control Studies; CRM; MCP-Mod; Simon's Two-Stage; Fleming's GST; Cross-over and personalized medicine designs; 1000+ scenarios for Fixed Term, Adaptive and Bayesian methods; Survival, Means, Proportions and Count endpoints; Sample Size Re-Estimation; Group Sequential Trials; Bayesian Assurance
25. References
Senn, S. (2002). Cross-over trials in clinical research (2nd Edition). John Wiley & Sons.
Pater, C. (2004). Equivalence and noninferiority trials – are they viable alternatives for registration of new drugs? (III). Current Controlled Trials in Cardiovascular Medicine, 5(1), 8.
Food and Drug Administration. Non-inferiority clinical trials to establish effectiveness. Guidance for industry. November 2016. https://www.fda.gov/downloads/Drugs/Guidances/UCM202140.pdf
Blackwelder, W.C., 2002. Showing a treatment is good because it is not bad: when does 'noninferiority' imply effectiveness? Controlled Clinical Trials, 23, pp. 52–54.
Chow, S.C., Shao, J., 2006. On non-inferiority margin and statistical tests in active control trials. Statistics in Medicine, 25, pp. 1101–1113.
Fleming, T.R., 2008. Current issues in non-inferiority trials. Statistics in Medicine, 27, pp. 317–332.
Althunian, T.A., de Boer, A., Groenwold, R.H. and Klungel, O.H., 2017. Defining the noninferiority margin and analysing noninferiority: an overview. British Journal of Clinical Pharmacology, 83(8), pp. 1636–1642.
Dibra, A., et al. (2005). Paclitaxel-eluting or sirolimus-eluting stents to prevent restenosis in diabetic patients. New England Journal of Medicine, 353(7), 663–670.
26. References
Pigeot, I., Schäfer, J., Röhmel, J., Hauschke, D., 2003. Assessing non-inferiority of a new treatment in a three-arm clinical trial including a placebo. Statistics in Medicine, 22, pp. 883–899.
Kieser, M., Friede, T., 2007. Planning and analysis of three-arm non-inferiority trials with binary endpoints. Statistics in Medicine, 26, pp. 253–273.
Hasler, M., Vonk, R., Hothorn, L.A., 2008. Assessing non-inferiority of a new treatment in a three-arm trial in the presence of heteroscedasticity. Statistics in Medicine, 27, pp. 490–503.
Mielke, M., Munk, A., Schacht, A., 2008. The assessment of non-inferiority in a gold standard design with censored, exponentially distributed endpoints. Statistics in Medicine, 27, pp. 5093–5110.
Mielke, M., Munk, A., 2009. The assessment and planning of non-inferiority trials for retention of effect hypotheses – towards a general approach. arXiv:0912.4169
Mielke, M., 2010. Maximum Likelihood Theory for Retention of Effect Non-Inferiority Trials (Doctoral dissertation, Niedersächsische Staats- und Universitätsbibliothek Göttingen).
Mütze, T., Munk, A., Friede, T., 2016. Design and analysis of three-arm trials with negative binomially distributed endpoints. Statistics in Medicine, 35, pp. 505–521.
27. References
Mütze, T., Konietschke, F., Munk, A., Friede, T., 2017. A studentized permutation test for three-arm trials in the 'gold standard' design. Statistics in Medicine, 36, pp. 883–898.
Binkley, N., Bolognese, M., Sidorowicz-Bialynicka, A., Vally, T., Trout, R., Miller, C., Buben, C.E., Gilligan, J.P., Krause, D.S. and Oral Calcitonin in Postmenopausal Osteoporosis (ORACAL) Investigators, 2012. A phase 3 trial of the efficacy and safety of oral recombinant calcitonin: the Oral Calcitonin in Postmenopausal Osteoporosis (ORACAL) trial. Journal of Bone and Mineral Research, 27(8), pp. 1821–1829.
Food and Drug Administration. Statistical approaches to establishing bioequivalence. Guidance for industry. 2001. https://www.fda.gov/media/70958/download
European Medicines Agency, CHMP. Guideline on the Investigation of Bioequivalence. London; 2010 Jan 20. www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2010/01/WC500070039.pdf
Food and Drug Administration. Draft Guidance on Progesterone. 2001. https://www.accessdata.fda.gov/drugsatfda_docs/psg/Progesterone_caps_19781_RC02-11.pdf
Schuirmann, D.J. A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. J Pharmacokinet Biopharm. 1987; 15(6): 657–80.
Senn, S. (2001). Statistical issues in bioequivalence. Statistics in Medicine, 20, 2785–2799.
28. References
Kirkwood, T.B.L. Bioequivalence testing – a need to rethink. Biometrics 1981; 37: 589–591.
Berger, R., Hsu, J. Bioequivalence trials, intersection-union tests, and equivalence confidence sets. Statistical Science 1996; 11: 283–319.
O'Quigley, J. and Baudoin, C. General approaches to the problem of bioequivalence. The Statistician, 1988; 37: 51–58.
Westlake, W.J. Symmetrical confidence intervals for bioequivalence trials. Biometrics 1976; 32: 741–744.
Lindley, D.V. Decision analysis and bioequivalence trials. Statistical Science 1998; 13: 136–141.
Schütz, H. Reference-scaled Average Bioequivalence. Bebac. https://bebac.at/lectures/Moscow2016-3.pdf
Tóthfalusi, L. et al. Evaluation of the bioequivalence of highly-variable drugs and drug products. Pharm Res. 2001; 18(6): 728–33.
Radicioni, M., Castiglioni, C., Giori, A., Cupone, I., Frangione, V. and Rovati, S., 2017. Bioequivalence study of a new sildenafil 100 mg orodispersible film compared to the conventional film-coated 100 mg tablet administered to healthy male volunteers. Drug Design, Development and Therapy, 11, p. 1183.