This document outlines strategies for conducting futility analyses in clinical trials with multiple interim analyses. It discusses motivations for futility rules, relevant statistical tools like conditional power and predictive power, and previous studies investigating futility rules based on these tools. The document proposes a theoretical framework for nonbinding futility rules using different statistical scales and numerical simulations in R to optimize the average sample size under the null hypothesis while controlling for power loss and sample size inflation.
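The conditional-power calculation that underpins such futility rules can be sketched in a few lines. This is a generic illustration, not code from the document; the function names and the hard-coded one-sided α = 0.025 critical value are assumptions:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_t, t, theta, z_alpha=1.959964):
    """Probability of crossing the final boundary z_alpha, given the interim
    z-statistic z_t at information fraction t and an assumed drift theta
    (the expected final z under the assumed effect).
    Uses the B-value decomposition B(t) = z_t * sqrt(t)."""
    b = z_t * sqrt(t)
    return norm_cdf((b + theta * (1.0 - t) - z_alpha) / sqrt(1.0 - t))

# Halfway through (t = 0.5) with no observed trend, continuing is near-futile:
cp_null = conditional_power(z_t=0.0, t=0.5, theta=0.0)
# A strong interim trend, extrapolated to the end, gives high conditional power:
cp_trend = conditional_power(z_t=2.5, t=0.5, theta=2.5 / sqrt(0.5))
```

A nonbinding futility rule of the kind described above would, for example, flag stopping when conditional power under the current trend falls below a threshold such as 10%.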
CDISC's CDASH and SDTM: Why You Need Both! - Kit Howard
CDISC's clinical data standards are widely used for clinical research, but many people wonder why there seem to be two standards for collected data: the Clinical Data Acquisition Standards Harmonization (CDASH) standard and the Study Data Tabulation Model (SDTM) standard. This poster steps through four significant reasons that reflect the differences in philosophy, intermediate goals and broad-scale uses. Examples illustrate each reason and how they affect your studies.
Interim Analysis of Clinical Trial Data: Implementation and Practical Advice - NAMSA
This document discusses interim analyses of clinical trial data. It describes different types of interim analyses including early stopping for safety or efficacy, sample size re-assessments, and administrative analyses. Early stopping for safety is generally done by a data monitoring committee to ensure participant safety. Early stopping for efficacy can determine whether a trial meets success criteria or shows futility. Sample size can be re-estimated based on nuisance parameters or treatment effects. Interim analyses are most useful when the alternative hypothesis is uncertain, enrollment is slow, or endpoints are acute. The document recommends doing some form of interim analysis or monitoring in almost all cases.
Non-inferiority and Equivalence Study design considerations and sample size - nQuery
About the webinar
This webinar examines the role of non-inferiority and equivalence in study design
In this free webinar, you will learn about:
-Regulatory information on this type of study design
-Considerations for study design and your sample size
-Practical worked examples of
--Non-inferiority Testing
--Equivalence Testing
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch the video at: https://www.statsols.com/webinars
Presented at PhUSE 2013
The evaluation of efficacy in oncology studies, in particular for solid tumors, is fairly standard and well defined by several regulatory guidance documents (e.g., from the EMA and FDA), including guidance for specific cancer types (e.g., the FDA's NSCLC guidance). Although some references are also given for non-solid tumors, the paper mainly focuses on solid tumor efficacy endpoints.
Overall Survival (OS), Best Overall Response per RECIST criteria, Progression-Free Survival (PFS), Time to Progression (TTP), and Best Overall Response Rate are some of the key efficacy indicators discussed.
The document discusses vulnerable subjects in clinical research such as students, hospital employees, and minority groups. It defines Good Clinical Practice (GCP) as standards for designing, conducting, and reporting clinical trials to protect human subjects. The foundations of ethical clinical research are outlined, including the Nuremberg Code, Declaration of Helsinki, and Belmont Report, with a focus on principles of GCP like informed consent and minimizing risks to subjects.
According to the FDA Draft Guidance for Industry on Electronic Submission and the Study Data Technical Conformance Guide, pharmaceutical companies will need to provide CDISC-compliant electronic submissions to the FDA. The paper explains the Data Standards Catalog, which dictates the standards the FDA accepts, and discusses how to prepare a CDISC electronic submission and what to include in it.
CDISC is a non-profit organization that establishes clinical research data standards to support data acquisition, exchange, and submission. It has developed several standards including CDASH, which aims to standardize data collection fields across clinical trials to streamline data analysis and reduce errors. CDASH defines a set of common safety domains and variables that can be collected consistently across studies in a standardized way. This helps analyze data more efficiently, reduces training time for sites, and decreases potential errors from inconsistent data collection.
The document discusses several Trial Design domains from CDISC, including Trial Arms (TA), Trial Elements (TE), and Trial Visits (TV). It describes the key variables in each domain, such as ARMCD, ETCD, ELEMENT, EPOCH, VISITNUM, and the start/end rules for trial elements and visits. The domains are used to represent the overall study design and plan without subject-level data.
The presentation is intended for clinical trial programmers and statisticians working on solid tumor studies in oncology. There are three types of studies in oncology: solid tumor, lymphoma, and leukemia. Solid tumor studies usually follow RECIST (Response Evaluation Criteria in Solid Tumors), while lymphoma studies follow the Cheson criteria and leukemia studies follow study-specific criteria. The presentation provides a brief introduction to RECIST 1.1, covering lesions (target, non-target, and new) and their selection criteria (size, number, etc.). It also discusses how changes in tumor measurements lead to responses (Complete Response, Partial Response, Stable Disease, Progressive Disease, and Not Evaluable).
Then, the presentation will introduce how RECIST 1.1 data are streamlined in CDISC – mainly in SDTM and ADaM. The presentation will introduce the new oncology SDTM domains - TU (Tumor Identification), TR (Tumor Results) and RS (Response) according to RECIST 1.1. The presentation will also show how ADaM datasets can be created for the tumor response evaluation and analysis in ORR (Objective Response Rate), PFS (Progression Free Survival) and TTP (Time to Progression).
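The RECIST 1.1 mapping from changes in the sum of target-lesion diameters to a response category can be sketched as follows. This is a simplified illustration of the published thresholds (a decrease of at least 30% from baseline for PR; an increase of at least 20% and at least 5 mm over the nadir, or new lesions, for PD); the function name and the omission of non-target-lesion logic are simplifications, not the presentation's actual derivation code:

```python
def classify_response(current, baseline, nadir, new_lesion=False):
    """Classify target-lesion response per simplified RECIST 1.1 rules.
    All measurements are sums of target-lesion diameters in mm."""
    # Progressive Disease: new lesion, or >=20% and >=5 mm increase over nadir
    if new_lesion or (nadir > 0 and current - nadir >= 5
                      and (current - nadir) / nadir >= 0.2):
        return "PD"
    # Complete Response: disappearance of all target lesions
    if current == 0:
        return "CR"
    # Partial Response: >=30% decrease from baseline
    if (baseline - current) / baseline >= 0.3:
        return "PR"
    return "SD"
```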
Feasibility Solutions to Clinical Trial Nightmares - jbarag
Slow patient recruitment and poor retention are perpetual problems that often result in missed recruitment milestones, and the cost of these delays can run to hundreds of thousands of dollars for drug and device developers. Recognizing this, early and detailed feasibility work can provide planning and contingency solutions focused on reducing the impact of delayed recruitment. Furthermore, understanding what motivates investigators and patients to actively participate in clinical studies, and how patient recruitment strategies and materials can support all stakeholders in completing studies on time, are critical aspects of clinical study delivery planning.
During this presentation, an experienced Premier Research feasibility and patient recruitment specialist reviewed feasibility approaches to protocol evaluation and addressed influences on country selection, site distribution, and patient recruitment strategies, to support more effective clinical trial planning and conduct.
For more information, go to http://www.premier-research.com.
Planning and Development of the ISS/ISE Webinar - Jay1818mar
This document provides a summary of a presentation on planning and developing integrated summaries of safety and efficacy data from multiple clinical trials. It discusses the purpose and requirements of integrated summaries, the planning process, special analysis considerations, and guidance documents. Key points covered include defining analysis populations and treatment groups, handling adverse events and laboratory data consistently across studies, and obtaining regulatory agency input on analysis plans.
This document summarizes key efficacy endpoints used in oncology clinical trials, including for solid tumors and non-solid tumors like acute myeloid leukemia. For solid tumors, the best overall response (BOR) is assessed using RECIST criteria to evaluate tumor shrinkage or progression based on target and non-target lesion measurements. Key time-to-event endpoints discussed include overall survival (OS), progression-free survival (PFS), and time to progression (TTP). For acute myeloid leukemia, response is assessed based on blood counts and bone marrow blast percentage according to International Working Group criteria, with endpoints like complete remission rate and event-free survival. Surrogate endpoints are also discussed.
Protocol and CRF in clinical trials.pptx - 445AmitPal
The document discusses the protocol and case report form (CRF) used in clinical trials. The protocol lays out the plan for the clinical trial, including who can participate, tests, procedures, medications, and length of study. It aims to protect participants' health and answer research questions. The CRF is used to collect standardized data across sites for a trial. It ensures accuracy, consistency, and completeness in data collection to help analyze results and answer the research hypotheses.
This document discusses CDISC standards for representing survival data from oncology clinical trials. It provides an overview of CDISC and describes the SDTM and ADaM domains that are useful for capturing efficacy endpoints involving survival, such as overall survival, progression-free survival and tumor response. Examples are given of how survival data from different patients would be represented in an ADTTE (Analysis Dataset for Time to Event) dataset according to CDISC ADaM standards.
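The core of an ADTTE derivation is the per-subject event/censoring logic. A minimal sketch, using integer study-day numbers and a hypothetical helper name (a real ADaM program would also carry PARAM, start dates, event descriptions, and traceability variables):

```python
def adtte_record(usubjid, paramcd, trtsdt, event_day=None, cutoff_day=0):
    """Build one ADTTE-style record. AVAL is time in days from treatment
    start (day-1 convention); CNSR = 0 for an event, 1 for censoring at
    the data cutoff."""
    if event_day is not None and event_day <= cutoff_day:
        return {"USUBJID": usubjid, "PARAMCD": paramcd,
                "AVAL": event_day - trtsdt + 1, "CNSR": 0}
    return {"USUBJID": usubjid, "PARAMCD": paramcd,
            "AVAL": cutoff_day - trtsdt + 1, "CNSR": 1}

# A subject who died on study day 101 (overall survival event):
dead = adtte_record("SUBJ-001", "OS", trtsdt=1, event_day=101, cutoff_day=200)
# A subject still alive at the day-200 data cutoff (censored):
alive = adtte_record("SUBJ-002", "OS", trtsdt=1, event_day=None, cutoff_day=200)
```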
The clinical trial process is one of the most critical and necessary steps in the development of all new drugs, biologics, and medical devices. Conducting clinical trials in Japan requires a delicate balancing act between a thorough understanding of the Japanese regulatory framework and an even deeper understanding of how clinical trials must be managed within the nuances and boundaries of Japanese culture.
In clinical trials and other scientific studies, an interim analysis is an analysis of data conducted before data collection has been completed. If a treatment is particularly beneficial or harmful compared to the concurrent placebo group while the study is ongoing, the investigators are ethically obliged to assess that difference using the data at hand and to give deliberate consideration to terminating the study earlier than planned.
In an interim analysis, whenever a drug under test shows adverse effects in humans, the trial is stopped immediately, on the principle that the maximum number of patients should receive the most effective treatment at the earliest possible stage. Interim analysis is also used to reduce the expected number of patients and to shorten the follow-up time needed to reach a conclusion: there is no need to spend additional resources once sufficient evidence about the outcome is available. In this presentation, the total sample size is divided into four equal parts, an analysis is performed at each stage, and a decision is made at each individual step.
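A four-look scheme of the kind described above can be illustrated with a small group-sequential simulation under the null hypothesis. This sketch uses the Pocock constant boundary (approximately 2.361 for four equally spaced looks at two-sided α = 0.05), which is an assumption about the boundary choice, not taken from the presentation; stage increments are standardized, so the z-statistic after look k is the cumulative sum divided by √k:

```python
import math
import random

def run_trial(boundary=2.361, looks=4, rng=random):
    """Return (stopped_early, look) for one simulated trial under H0,
    stopping when |z| crosses the constant Pocock boundary."""
    s = 0.0
    for k in range(1, looks + 1):
        s += rng.gauss(0.0, 1.0)   # standardized stage increment
        z = s / math.sqrt(k)       # cumulative z-statistic at look k
        if abs(z) > boundary:
            return True, k
    return False, looks

random.seed(42)
n_sims = 20000
rejections = sum(run_trial()[0] for _ in range(n_sims))
type1 = rejections / n_sims  # should be close to the nominal 0.05
```

The point of the boundary is exactly the trade-off the summary describes: repeated looks at an unadjusted 1.96 threshold would inflate the type I error well above 5%.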
This document summarizes Angelo Tinazzi and Cedric Marchand's experience submitting clinical trial data to the FDA using CDISC standards. It describes their recent submission, including standards used, current status, and interaction with FDA reviewers. It also discusses requirements for electronic submissions and FDA feedback received from a test submission, including suggestions for SDTM content and define.xml. The presentation aims to help others in properly preparing FDA submissions using CDISC standards.
This document discusses several genomic tests for prostate cancer, including the Oncotype DX prostate cancer test, Prolaris test, and Decipher test. It provides information on what each test is, who it is for, what biological factors it measures, and how the results can help guide treatment decisions, especially for patients with low or intermediate risk prostate cancer considering active surveillance versus immediate treatment. The tests help improve risk stratification by incorporating individual tumor biology beyond standard clinical factors alone.
Adverse Events and Serious Adverse Events - Katalyst HLS
This document discusses adverse events and serious adverse events in clinical trials. It reviews FDA inspection findings related to reporting adverse events and the regulations surrounding adverse event reporting. It outlines how adverse events should be recorded, including source documentation and attribution. It also discusses reporting criteria and timelines for reporting adverse events to sponsors and regulatory bodies. Finally, it reviews considerations for auditing adverse events, such as whether events were properly graded and reported.
This document provides an overview of clinical trial design. It discusses the typical phases of clinical trials including:
- Phase I which focuses on safety and dose escalation
- Phase II which screens for therapeutic activity and further evaluates toxicity
- Phase III which uses a proper control group to further evaluate efficacy and monitors long-term safety
It also describes various study designs including randomized controlled trials, parallel designs, cross-over designs, and cohort studies. Key aspects of each design like advantages, disadvantages, and implementation are covered. The document provides a comprehensive yet concise primer on clinical trial methodology.
This document provides guidance on starting ADaM specification development and dataset programming. It recommends starting with ADaM subject matter experts and a well-defined specification template. It also recommends understanding the SDTM datasets, analysis keys, and Occurrence Data Structure requirements. The document outlines considerations like variable attributes and traceability when developing specifications and programming datasets. It emphasizes adhering to the ADaM Implementation Guide.
A Systematic Review of ADaM IG Interpretation - Angelo Tinazzi
The document summarizes a systematic review of publications about the implementation of the ADaM model. Over 100 papers were identified that discussed ADaM implementation, with the majority coming from CRO authors. Several areas of interpretation in the ADaM guidelines were identified from the literature, including how to classify parameters in BDS, derive rows versus columns, and determine what constitutes an "analysis-ready" dataset. The review concluded that feedback from users would help the CDISC team further develop and clarify the ADaM guidelines.
Handling Third Party Vendor Data - Katalyst HLS
The document discusses handling third party vendor data in clinical trials. It covers four types of external data including safety laboratory data, PK/PD data, pharmacogenetics data, and device data. Centralized vendors provide standardized testing across sites and electronic transfer of data to minimize errors. Data reconciliation involves generating discrepancy reports using primary keys like sponsor ID, study ID, and subject ID, and secondary keys like date of birth. Queries are raised to sites or vendors to resolve inconsistencies between third party and clinical trial databases.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
2008 JSM - Meta Study Data vs Patient Data - Terry Liao
Hsini (Terry) Liao, Ph.D., Yun Lu, Hong Wang, “Comparison of Individual Patient-Level and Study-Level Meta-Analyses Using Time-to-Event Analysis in Drug-Eluting Stent Data”, Abstract No. 301037, Joint Statistical Meetings, Session No. 90, Denver, CO, August 2008.
This document discusses various statistical methods used in engineering. It covers topics like sample plans, capability studies, gauge R&R studies, comparative analysis, design of experiments (DOE), correlation, regression, reliability, and the DMAIC process in Six Sigma. DOE techniques like full factorial designs, fractional factorial designs, custom designs, evaluation of designs, response surface methods, and residuals are explained. The document provides examples and outlines the applications of these various statistical analysis methods.
- Simulations of clinical trial randomization methods showed consistent trade-offs between efficiency and unpredictability over different methods and parameters. No single best method optimized both metrics.
- Two metrics were used to evaluate predictability (potential for selection bias) and efficiency (loss of statistical power): simulations revealed clear trade-offs between higher predictability and lower efficiency.
- As sample size increased, most methods became more efficient while some also became more predictable and others less predictable, depending on the method. Permuted blocks, dynamic allocation, and complete randomization were among the methods evaluated.
EUGM 2011 | DARCHY | Deployment & use of east within sanofi r & dCytel USA
1. The document discusses the deployment and use of the East statistical software within Sanofi R&D. East is used primarily for designing and simulating group sequential trials.
2. While East has tools for monitoring trials and analyzing data, these tools are rarely used in practice at Sanofi. The analysis module cannot perform the intended stage-wise adjusted analysis.
3. The document raises several technical issues with East regarding extra interim looks, nuisance parameters, repeated confidence intervals, and new adaptive design tools. Clarification on these topics is needed.
Xi Zhang presented their Ph.D. dissertation which analyzed functional regression models and their application to high-frequency financial data. The presentation included:
1. An introduction to functional data analysis and the use of intraday cumulative return curves from stock price data.
2. A simulation study comparing predictive methods in functional autoregressive models, finding the estimated kernel method performed well.
3. An application of functional extensions of the Capital Asset Pricing Model to predict intraday return curves, finding simpler models with intercepts had better predictive performance than more complex models.
Version 8 of SigmaXL statistical software includes several new features that make multiple comparisons easier. It adds Analysis of Means charts for comparing normal, binomial, and Poisson distributions in one-way and two-way settings. It also improves multiple comparisons procedures for one-way ANOVA, adds tests for equal variances, improves chi-square tests and associations, and includes new descriptive statistics, templates, and calculators.
A Moment Inequality for Overall Decreasing Life Class of Life Distributions w...inventionjournals
:A moment inequality is derived for the system whose life distribution is in an overall decreasing life (ODL) class of life distributions. A new nonparametric test statistic for testing exponentiality against ODL is investigated based on this inequality. The asymptotic normality of the proposed statistic is presented. Pitman's asymptotic efficiency, power and critical values of this test are calculated to assess the performance of the test. Real examples are given to elucidate the use of the proposed test statistic in the reliability analysis. Wealso proposed a test for testing exponentiality versus ODL for right censored data and the power estimates of this test are also simulated for censored data for some commonly used distributions in reliability. Finally, real data are used as an example for practical problems.
This document summarizes a webinar presentation about adaptive sample size re-estimation for confirmatory time-to-event trials. The presentation discusses a motivating lung cancer trial example and introduces a promising zone design where the sample size is increased only if interim results fall within a promising zone. It demonstrates the design, simulation, and interim monitoring capabilities of East®SurvAdapt software. Key aspects of the adaptive design methodology are discussed, including conditional power calculations, maintaining type 1 error control, and balancing sample size increases with trial duration.
The document discusses the 2k factorial design, which is a special case of the general factorial design where k factors are each studied at two levels (usually labeled as low and high). The 2k factorial design is widely used in industrial experimentation and forms a basic building block for other experimental designs. Key aspects covered include orthogonality, estimating main effects and interactions between factors, ANOVA analysis to determine significant effects, and evaluating residuals to validate model assumptions.
We discuss a general roadmap for generating causal inference based on observational studies used to general real world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e, infinite dimensional) statistical models. TMLE is a two stage procedure that first involves using ensemble machine learning termed super-learning to estimate the relevant stochastic relations between the treatment, censoring, covariates and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric model based estimators) to build a single most powerful ensemble machine learning algorithm. We present Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second step, the TMLE involves maximizing a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second step bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e, confidence intervals, p-values etc). We also review recent advances in collaborative TMLE in which the fit of the treatment and censoring mechanism is tailored w.r.t. performance of TMLE. We also discuss asymptotically valid bootstrap based inference. Simulations and data analyses are provided as demonstrations.
Presentation of 2 papers related to temporal graph pattern mining.
Lin, Fu-ren, et al. "Mining time dependency patterns in clinical pathways." International Journal of Medical Informatics 62.1 (2001): 11-25.
Liu, Chuanren, et al. "Temporal phenotyping from longitudinal electronic health records: A graph based framework." Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015.
Computational Pool-Testing with Retesting StrategyWaqas Tariq
Pool testing is a cost effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops computational pool-testing strategy based on a proposed pool testing with re-testing strategy. Statistical moments based on this applied design have been generated. With advent of computers in 1980‘s, pool-testing with re-testing strategy under discussion is handled in the context of computational statistics. From this study, it has been established that re-testing reduces misclassifications significantly as compared to Dorfman procedure although re-testing comes with a cost i.e. increase in the number of tests. Re-testing considered improves the sensitivity and specificity of the testing scheme.
A PRACTICAL POWERFUL ROBUST AND INTERPRETABLE FAMILY OF CORRELATION COEFFICIE...Savas Papadopoulos, Ph.D
If we conducted a competition for which statistical quantity would be the most valuable in exploratory data analysis, the winner would most likely be the correlation coefficient with a significant difference from its first competitor. In addition, most data applications contain non-normal data with outliers without being able to be converted to normal data. Therefore, we search for robust correlation coefficients to nonnormality and/or outliers that could be applied to all applications and detect influenced or hidden correlations not recognized by the most popular correlation coefficients. We introduce a correlation-coefficient family with the Pearson and Spearman coefficients as specific cases. Other family members provide desirable lower p-values than those derived by the standard coefficients in the earlier problems. The proposed family of coefficients, their cut-off points, and p-values, computed by permutation tests, could be applied by all scientists analyzing data. We share simulations, code, and real data by email or the internet.
EUGM 2011 | JEHL | group sequential designs with 2 time to event endpointsCytel USA
This document discusses approaches for handling multiple time-to-event endpoints in group sequential clinical trial designs. It provides examples of hierarchical testing procedures where the secondary endpoint is only evaluated if the primary endpoint is significant. It also discusses approaches where trials are driven by both primary and secondary event types, with interim analyses planned for each endpoint. Maintaining control of the overall type I error rate across multiple analyses and endpoints is an important consideration.
- The document discusses factors to consider when determining sample size, such as objectives, sampling design, required accuracy, population variability, and practical constraints.
- It provides formulas for calculating sample sizes for different study designs, including simple random sampling, stratified sampling, and hypothesis testing of one or two proportions/means.
- An example calculates the needed sample size of 152 people in each group to test the hypothesis that drug A affects blood pressure, based on parameters from a pilot study.
- The document discusses the analysis of single factor experiments, including comparing two conditions or treatments, analysis of variance, and one-way layouts.
- It provides examples of hypothesis testing for single factor experiments including comparing two population means with both known and unknown variances.
- Guidelines are given for checking assumptions, interpreting results, and estimating model parameters for single factor experiments.
Similar to Strategies for setting futility analyses at multiple time points in clinical trials (20)
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
11.1 Role of physical biological in deterioration of grains.pdf
Strategies for setting futility analyses at multiple time points in clinical trials
1. Strategies for Futility Analyses
at multiple time points in clinical trials
Lu Mao, PhD Student
University of North Carolina at Chapel Hill
Supervisor: Paul Gallo, PhD
2. OUTLINE
| Lu Mao | 08/14/2012 | Futility analyses | Business Use Only
Background for Futility Analysis
•Motivation
•B-value theory
•Conditional Power
•Predictive Power
Methods and Results
•Two futility looks
•Three futility looks
Software (R) Demonstration
Conclusion
3. BACKGROUND - MOTIVATION
Traditional Design: fix the sample size and perform analyses after all subjects have been recruited.
Group Sequential Design: several interim analyses are conducted in the course of patient enrollment
•Safety
•Early determination of (in)efficacy
•Ethics
•Cost reduction
Futility Analysis: a group sequential design that allows early termination of the trial when the likelihood of establishing efficacy at the final stage is judged to be low.
4. BACKGROUND - MOTIVATION
An ideal futility analysis design:
1.Considerably curtails the length of the trial when there is no/negative effect
2.Does not substantially affect the operating characteristics (Type I and Type II errors) of the original fixed-sample design.
[Diagram: trial paths under H0 and HA; the futility rules trigger STOP before the final analysis]
5. BACKGROUND - STATISTICAL TOOLS
Group sequential trials – multiple testing on several statistics based on the first m observations, m = n_1, ..., n_k.
Let the test statistics, e.g. z scores, be (Z_{n_1}, ..., Z_{n_k}, Z_n).
The rejection region is R = {|Z_n| > z_{1-α/2}}.
If Z > 0 means a positive effect for the treatment, we wish to stop the trial for futility when the interim Z scores are very low: set boundaries (z_{n_1}, ..., z_{n_k}) and, at the i'th interim, stop to accept H0 if Z_{n_i} ≤ z_{n_i}.
We then need to find the joint distribution of (Z_{n_1}, ..., Z_{n_k}, Z_n).
6. BACKGROUND - STATISTICAL TOOLS
Typically, the Z statistic calculated from the first m observations can be expressed in terms of a partial sum statistic S_m = Σ_{i=1}^m ξ_i in the form

Z_m = S_m / (σ̂ √m),

where σ̂² is an estimate of σ², the variance of ξ_i.
Example: two-sample normal test,

Z_m = √m (X̄ − Ȳ) / √(2σ̂²) = Σ_{i=1}^m (X_i − Y_i) / √(2σ̂² m),

i.e. ξ_i = (X_i − Y_i)/√2.
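The partial-sum representation of the interim Z statistic is easy to check numerically. A minimal sketch in Python, with simulated data (the talk's own numerics were done in R and MATLAB; all names and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def z_from_partial_sum(x, y):
    """Interim z statistic from the first m pairs:
    Z_m = sum_i (x_i - y_i) / sqrt(2 * sigma_hat^2 * m)."""
    m = len(x)
    s_m = (x - y).sum()                              # partial sum S_m
    two_sigma2_hat = x.var(ddof=1) + y.var(ddof=1)   # estimates 2 * sigma^2
    return s_m / np.sqrt(two_sigma2_hat * m)

# simulated two-arm data with a true standardized effect of 0.5
x = rng.normal(0.5, 1.0, 200)   # treatment arm
y = rng.normal(0.0, 1.0, 200)   # control arm
z = z_from_partial_sum(x, y)
```

With a standardized effect of 0.5 and m = 200 per arm, Z_m concentrates around 0.5·√(m/2) ≈ 5 with unit standard deviation.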
7. BACKGROUND - B-VALUE THEORY
It is easier to find the joint distribution of (S_{n_1}, ..., S_{n_k}, S_n) by the well known Brownian motion approximation to the partial sums.
Indeed, if δ = E(ξ_i) is the effect size (true state of nature), then approximately

(1/(σ√n)) (S_{n_1}, ..., S_{n_k}, S_n)ᵀ ~ N( θ (t_1, ..., t_k, 1)ᵀ, Σ ),  Σ_{ij} = min(t_i, t_j),

where θ = √n δ/σ is related to the true state of nature and the overall sample size, and t_j = n_j/n, j = 1, ..., k, is the fraction of the sample size at the j'th interim, call it the information time. Obviously 0 < t_1 < ⋯ < t_k < 1.
8. BACKGROUND - B-VALUE THEORY
Call the left side of the previous formula the B values, denoted (B(t_1), ..., B(t_k), B(1)).
We have seen that the B scores follow the marginal distribution of a standard Brownian motion with drift term θ.
The relationship between the Z scores and the B scores is simple:

B(t_j) = S_{n_j} / (σ̂ √n) = √(n_j/n) · S_{n_j} / (σ̂ √n_j) = √t_j · Z_{n_j},

or: Z_{n_j} = B(t_j) / √t_j.
9. BACKGROUND - CONDITIONAL POWER
Conditional Power: quantifies the notion of likelihood of success given the current data:

CP(b_j; θ) ≡ P(B(1) > z_{1−α/2} | B(t_j) = b_j; θ), j = 1, ..., k.

Futility rule: stop at the j'th interim if CP(B(t_j); θ) ≤ γ.
Since B(1) − B(t_j) ~ N(θ(1−t_j), 1−t_j), independent of B(t_j), we have

B(1) | B(t_j) = b_j ~ N(b_j + θ(1−t_j), 1−t_j).

Therefore

CP(b_j; θ) = 1 − Φ( (z_{1−α/2} − θ(1−t_j) − b_j) / √(1−t_j) ).
10. BACKGROUND - CONDITIONAL POWER
Conditional power based on the hypothesized θ does not allow the state of nature to adapt to the observed data.
Conditional power based on the current estimate θ̂_j:

CP(b_j; θ̂_j) ≡ P(B(1) > z_{1−α/2} | B(t_j) = b_j; θ̂_j).

Recall that B(t_j) ~ N(t_j θ, t_j); we obtain θ̂_j = B(t_j)/t_j. Hence

CP(b_j; θ̂_j) = 1 − Φ( (z_{1−α/2} − θ̂_j(1−t_j) − b_j) / √(1−t_j) ) = 1 − Φ( (z_{1−α/2} − b_j/t_j) / √(1−t_j) ).
11. BACKGROUND - PREDICTIVE POWER
Another way of allowing the state of nature to adapt to the observed data is (Bayesian) predictive power.
Predictive power – conditional power averaged over the posterior distribution of the state of nature:

PP(b_j; π(·)) = ∫ CP(b_j; θ) π(θ | b_j) dθ.

Typically, we take the uniform prior π(θ) ∝ 1. After some derivation, we obtain

PP(b_j; π ≡ 1) = 1 − Φ( (√t_j · z_{1−α/2} − b_j/√t_j) / √(1−t_j) ).
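The three scales can be coded directly from the formulas above. A minimal Python sketch, assuming σ known and two-sided α = 0.05 (the talk's own implementation, fut(), is in R; the function names here are mine):

```python
from math import sqrt
from scipy.stats import norm

Z_ALPHA = norm.ppf(0.975)            # z_{1-alpha/2} for two-sided alpha = 0.05

def cp(b, t, theta, z=Z_ALPHA):
    """Conditional power given B(t) = b under drift theta."""
    return 1 - norm.cdf((z - theta * (1 - t) - b) / sqrt(1 - t))

def cp_hat(b, t, z=Z_ALPHA):
    """Conditional power under the current estimate theta_hat = b / t."""
    return 1 - norm.cdf((z - b / t) / sqrt(1 - t))

def pp(b, t, z=Z_ALPHA):
    """Predictive power under the uniform prior pi(theta) ∝ 1."""
    return 1 - norm.cdf((sqrt(t) * z - b / sqrt(t)) / sqrt(1 - t))
```

For a 90%-power design (θ = z_{0.975} + z_{0.90}) observed exactly on track at t = 0.5, CP comes out near 0.97 while PP equals the unconditional power 0.90: averaging over the posterior discounts the hypothesized drift.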
12. BACKGROUND - POWER FUNCTION
We have seen three ways of choosing the futility boundary (b_1, ..., b_k):
1.Conditional power: CP
2.Conditional power based on the current estimate: CP(θ̂)
3.Predictive power: PP
Given the boundary, the power function is given by

Ψ(θ) ≡ P(Reject H0 | θ) = P_θ( B(t_j) ≥ b_j, j = 1, ..., k; B(1) > z_{1−α/2} ) < P_θ( B(1) > z_{1−α/2} ).
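Ψ(θ) is a probability under the joint normal law of (B(t_1), ..., B(t_k), B(1)) with covariance min(t_i, t_j); the talk evaluates such probabilities with the mvtnorm R package. An equivalent Monte Carlo sketch in Python (illustrative defaults, names are mine):

```python
import numpy as np

def power_fn(theta, t, b, z_alpha=1.959964, reps=200_000, seed=3):
    """Monte Carlo Psi(theta) = P_theta(B(t_j) >= b_j for all j, B(1) > z_{1-a/2})."""
    rng = np.random.default_rng(seed)
    tt = np.append(t, 1.0)                    # look times plus the final analysis
    cov = np.minimum.outer(tt, tt)            # Cov(B(s), B(t)) = min(s, t)
    B = rng.multivariate_normal(theta * tt, cov, size=reps)
    passed = (B[:, :-1] >= np.asarray(b)).all(axis=1)   # survived every futility look
    return np.mean(passed & (B[:, -1] > z_alpha))
```

With the futility bounds disabled (b_j = −∞) this recovers the fixed-sample type I error 0.025 at θ = 0 and power 0.90 at θ = z_{0.975} + z_{0.90}; any finite b_j can only lower Ψ(θ), exactly as the inequality above states.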
13. BACKGROUND - ERROR PROBABILITIES
Thus the power function is reduced both at θ = 0 and at the hypothesized effect size.
In statistical language, both the type I error rate and the power are decreased.
Power loss: a fraction of successful trials are terminated by the futility rule.
14. BACKGROUND - PREVIOUS STUDIES
Chang et al. 2004
•Assesses one futility look at midway of trial based on conditional power;
•Shows that most power loss can be “reclaimed” by lowering the (final) critical value to achieve a type I error rate of exactly α
(recall that with critical value z_{1−α/2} the overall type I error is < α);
•Provides a graphical method of doing this.
Lachin et al. 2005
•One futility look based on conditional power at midway;
•Suggests an iterative algorithm for determining the critical value that achieves α, to regain power.
15. BACKGROUND - PREVIOUS STUDIES
Snapinn et al. 2006
•Reviews conditional power approach to futility rule;
•Notes the problem with reclaiming α: the rules become binding, i.e. once the boundary is crossed the trial must stop.
Emerson et al. 2005
•Considers CP, CP(θ̂), PP and various other scales in determining futility rules;
•Argues that the scale used is less important than the resulting operating characteristics.
V. Shih & P. Gallo 2010
•Investigates power loss vs. sample size reduction for one futility rule at midway based on CP, CP(θ̂), PP.
16. METHODS – GENERAL
Rationale
•Nonbinding futility rule: no type I error reclaiming;
•ASN (average sample number) under 퐻0 is the optimization target, provided that other factors are controlled;
•To regain power, we may enlarge the sample size – sample size inflation (SI) is the control target;
•No power regaining – power loss is the control target.
Setting
•Multiple futility looks at arbitrary time points – For simplicity and practicality we only consider two and three looks at evenly spaced interims;
•Different scales (CP, CP(θ̂), PP, etc.) for setting the futility rules.
17. METHODS – GENERAL
Aim
•Setting up a theoretical framework for nonbinding futility rules;
•Comparing numerical results using the different scales CP, CP(θ̂) and PP;
•Developing an easy-to-implement program, based on efficient numerical algorithms, that lets the user choose the scale, the values of the scale, and the setup of interim time points.
Facilities
•Most numerical analyses are done in R; a few in MATLAB;
•We make use of the R package mvtnorm for multivariate normal distribution function evaluations.
18. METHODS – SAMPLE SIZE INFLATION
Sample size inflation (SI) approach
•The trade-off is between SI factor and ASN (average sample size under 퐻0).
•Recall that θ = √n δ/σ; if we find the θ_SI that achieves power 1−β, then since the hypothesized effect size δ is known, we obtain n;
•Since in the fixed-sample test θ_0 = z_{1−α/2} + z_{1−β}, we have the sample size inflation factor

R = ( θ_SI / (z_{1−α/2} + z_{1−β}) )²;

•By definition θ_SI = Ψ⁻¹(1−β); by monotonicity, the bisection method can be used to obtain θ_SI;
•Note that the inflation factor R does NOT depend on the hypothesized effect size.
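For a single look, Ψ(θ) reduces to a bivariate normal probability, so θ_SI can be found by bisection exactly as described. A Python sketch (a simplification: the B-scale bound is held fixed rather than re-derived from a CP rule at each θ; scipy's multivariate normal stands in for mvtnorm, and the names psi/theta_si are mine):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

Z_A, Z_B = norm.ppf(0.975), norm.ppf(0.90)     # alpha = 0.05, power 0.90

def psi(theta, t1, b1):
    """Psi(theta) = P(B(t1) >= b1, B(1) > z_{1-a/2}) for one futility look,
    via the bivariate normal with Cov(B(s), B(t)) = min(s, t)."""
    mean = np.array([theta * t1, theta])
    cov = np.array([[t1, t1], [t1, 1.0]])
    # negate to turn the lower-orthant cdf into an upper-orthant probability
    return multivariate_normal(mean=-mean, cov=cov).cdf(np.array([-b1, -Z_A]))

def theta_si(t1, b1, power=0.90, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection for theta_SI = Psi^{-1}(power); Psi is increasing in theta."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid, t1, b1) < power else (lo, mid)
    return 0.5 * (lo + hi)

th = theta_si(0.5, 0.0)              # one look at t = 0.5, stop if B(0.5) < 0
R = (th / (Z_A + Z_B)) ** 2          # sample size inflation factor
```

The resulting R sits just above 1, in line with the k = 1 row of the SI table on slide 20: regaining the power lost to a nonbinding futility rule costs very little extra sample size.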
19. METHODS – POWER LOSS
Power loss approach:
•Without SI to regain power;
•We look at equal CP, equal CP(θ̂), equal PP, and equal power loss rules at two and three evenly spaced interims
(power loss at t_j: l_j(θ) = P_θ( B(t_i) ≥ b_i, i = 1, ..., j−1; B(t_j) < b_j; B(1) > z_{1−α/2} )).
•The trade-off is between power loss and ASN
•We also assess the optimal rules (the rules that result in minimum ASN given certain power loss)
-Optimization done by grid search
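The per-look power loss l_j(θ) defined above has a direct Monte Carlo estimate; a Python sketch with illustrative boundaries (the talk's actual computations used R):

```python
import numpy as np

def power_loss(theta, t, b, z_alpha=1.959964, reps=200_000, seed=4):
    """Monte Carlo l_j(theta) = P_theta(B(t_i) >= b_i for i < j,
    B(t_j) < b_j, B(1) > z_{1-a/2}), for each futility look j."""
    rng = np.random.default_rng(seed)
    tt = np.append(t, 1.0)                     # look times plus the final analysis
    cov = np.minimum.outer(tt, tt)             # Cov(B(s), B(t)) = min(s, t)
    B = rng.multivariate_normal(theta * tt, cov, size=reps)
    win = B[:, -1] > z_alpha                   # would have rejected H0 at the end
    alive = np.ones(reps, dtype=bool)
    losses = []
    for j, bj in enumerate(b):
        stopped_here = alive & (B[:, j] < bj)  # first boundary crossing at look j
        losses.append(np.mean(stopped_here & win))
        alive &= B[:, j] >= bj
    return np.array(losses)
```

Summing l_j over the looks gives the total power loss, i.e. the gap between the fixed-sample power and Ψ(θ); an equal power loss rule chooses (b_1, ..., b_k) to make the l_j equal, and the grid search scans candidate boundaries for minimum ASN at a given total loss.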
20. RESULTS – SAMPLE SIZE INFLATION
SI factor
•To achieve 1−훽=0.9 based on equal conditional power 훾 at 푘=1,⋯,5 evenly spaced interims
•When γ ≤ 0.5, the inflation is in fact negligible (≤ 1.1), indicating great practicality in applying these nonbinding futility rules through SI.
SI factor by conditional power γ and number of looks k:

k \ γ   0.2     0.3     0.4     0.5     0.6     0.7     0.8
1       1.002   1.007   1.018   1.038   1.073   1.129   1.221
2       1.006   1.016   1.033   1.063   1.106   1.175   1.282
3       1.009   1.022   1.045   1.078   1.130   1.201   1.314
4       1.011   1.029   1.054   1.092   1.145   1.222   1.337
5       1.015   1.033   1.061   1.102   1.159   1.237   1.355
21. RESULTS – SAMPLE SIZE INFLATION
ASN
•Same setting as the previous table
•There is a substantial reduction in sample size for k = 2, 3 and γ ≤ 0.5, a practical range of futility looks with tolerable SI.
ASN (as a fraction of the fixed sample size) by conditional power γ and number of looks k:

k \ γ   0.2     0.3     0.4     0.5     0.6     0.7     0.8
1       0.823   0.766   0.722   0.691   0.675   0.674   0.696
2       0.756   0.715   0.678   0.647   0.622   0.609   0.609
3       0.716   0.677   0.648   0.622   0.604   0.588   0.582
4       0.693   0.657   0.628   0.606   0.589   0.578   0.571
5       0.680   0.643   0.615   0.595   0.580   0.569   0.563
22. RESULTS – SAMPLE SIZE INFLATION
Additional issue with SI
•Power = 1−β at the hypothesized effect size, by construction;
•We still need to assess the global power behavior, especially near the hypothesized δ;
•The right figure shows that the power curves for the SI'ed futility design are almost indistinguishable from those of the fixed-sample (reference) design when δ ≥ 0.5 × (designed δ).
23. RESULTS – POWER LOSS
Two futility looks
ASN by power loss under each rule:

Power loss   Equal CP   Equal CP(θ̂)   Equal PP   Equal power loss   Optimal
0.002        0.751      0.769          0.724      0.722              0.717
0.003        0.726      0.727          0.696      0.692              0.689
0.005        0.700      0.692          0.663      0.665              0.661
0.007        0.674      0.658          0.636      0.640              0.635
0.010        0.648      0.627          0.611      0.615              0.610
0.015        0.620      0.596          0.584      0.591              0.584
0.021        0.591      0.565          0.557      0.567              0.557
0.029        0.562      0.537          0.532      0.544              0.532
0.041        0.532      0.508          0.506      0.520              0.505
24. RESULTS – POWER LOSS
Equal PP is the closest to the optimal bound overall;
Equal power loss approximates the optimal bound only when the power loss is very small.
25. RESULTS – POWER LOSS
Three futility looks
ASN by power loss under each rule:

Power loss   Equal CP   Equal CP(θ̂)   Equal PP   Equal power loss   Optimal
0.003        0.708      0.738          0.680      0.673              0.661
0.005        0.683      0.708          0.644      0.641              0.631
0.007        0.659      0.678          0.618      0.620              0.611
0.010        0.635      0.637          0.593      0.594              0.584
0.014        0.610      0.600          0.563      0.569              0.559
0.020        0.584      0.564          0.536      0.548              0.534
0.028        0.556      0.528          0.508      0.522              0.507
0.037        0.527      0.495          0.482      0.500              0.481
0.051        0.495      0.462          0.454      0.476              0.453
26. RESULTS – POWER LOSS
Overall, equal PP again performs best of all, though it is less satisfactory for small power losses;
As with two looks, equal power loss does well for small power losses but not for greater ones.
28. RESULTS – OPTIMAL BOUNDS
Optimal bounds for three futility looks (precision 0.01)
| Lu Mao | 08/14/2012 | Futility analyses | Business Use Only
28
Power loss | z₁ (γ₁)       | z₂ (γ₂)      | z₃ (γ₃)     | z₄   | ASN
0.003      | -1.13 (0.457) | -0.09 (0.284) | 0.63 (0.112) | 1.96 | 0.661
0.005      | -0.98 (0.491) |  0.00 (0.316) | 0.79 (0.174) | 1.96 | 0.631
0.007      | -0.86 (0.519) |  0.08 (0.345) | 0.84 (0.201) | 1.96 | 0.611
0.010      | -0.66 (0.565) |  0.16 (0.375) | 0.86 (0.209) | 1.96 | 0.584
0.014      | -0.49 (0.603) |  0.21 (0.394) | 0.90 (0.229) | 1.96 | 0.559
0.020      | -0.38 (0.627) |  0.34 (0.444) | 0.94 (0.250) | 1.96 | 0.534
0.028      | -0.22 (0.662) |  0.43 (0.480) | 0.94 (0.251) | 1.96 | 0.507
0.037      | -0.08 (0.691) |  0.48 (0.500) | 1.05 (0.312) | 1.96 | 0.481
0.051      |  0.02 (0.711) |  0.67 (0.575) | 1.10 (0.346) | 1.96 | 0.453
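The γₖ values in the table can be reproduced from the z-scale bounds with the standard conditional-power formula for a Brownian-motion test statistic. A minimal Python sketch, under the assumptions (not stated on this slide) that the three looks fall at information fractions t = 0.25, 0.50, 0.75, that α = 0.05 two-sided and β = 0.10, and that γₖ is the conditional power at the designed effect; these assumptions reproduce the "power loss 0.003" row to three decimals:

```python
from statistics import NormalDist

N = NormalDist()
z_alpha = N.inv_cdf(0.975)                # 1.96, two-sided alpha = 0.05
eta = z_alpha + N.inv_cdf(0.90)           # drift at the designed effect (beta = 0.10)

def cp(z, t):
    """Conditional power at the designed effect, given interim Z = z at information fraction t."""
    return N.cdf((z * t ** 0.5 + eta * (1 - t) - z_alpha) / (1 - t) ** 0.5)

# "Power loss 0.003" row of the table; looks assumed at t = 0.25, 0.50, 0.75
for z, t, gamma in [(-1.13, 0.25, 0.457), (-0.09, 0.50, 0.284), (0.63, 0.75, 0.112)]:
    print(f"z = {z:+.2f} at t = {t:.2f}: CP = {cp(z, t):.3f} (table gamma = {gamma})")
```

The agreement at all three looks supports reading the parenthesized numbers as conditional power at the hypothesized effect.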
29. RESULTS – DISCUSSION
Equal conditional power is not a good idea for futility rules at multiple time points;
Intuitively, conditional power does not adapt to the observed data as the trial moves along;
The same conditional power at a later time point means something drastically different from the same value at an earlier point if the early data already contradict the hypothesized θ.
Allowing the state of nature to adapt is probably the reason for the success of equal PP.
Note that our findings SHOULD NOT be taken to mean that the idea of conditional power is bad.
30. RESULTS – DISCUSSION
Compare the bounds:
•These are the futility bounds for power loss 0.01;
•Equal PP and the optimal bounds coincide very well;
•Compared with the optimal bounds, equal CP is conservative in the beginning and aggressive in the end; equal CP(θ̂) is aggressive in the beginning and conservative in the end.
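The ASN and power-loss figures reported for these bounds can be checked by direct Monte Carlo simulation of the Brownian-motion formulation. A sketch in Python, assuming equally spaced looks at t = 0.25, 0.50, 0.75, using the optimal bounds from the "power loss 0.010" row of the slide-28 table, and setting the drift under H1 so that the fixed design has power 0.900 (α = 0.05 two-sided, β = 0.10); all of these settings are our reading of the slides, not output of fut():

```python
import random
from statistics import NormalDist

N = NormalDist()
rng = random.Random(1)
n_sim = 100_000
looks = [0.25, 0.50, 0.75, 1.0]           # information fractions (assumed equally spaced)
fut_bounds = [-0.66, 0.16, 0.86]          # optimal bounds, "power loss 0.010" row
z_final = 1.96                            # two-sided alpha = 0.05
eta = N.inv_cdf(0.975) + N.inv_cdf(0.90)  # drift at the designed effect (beta = 0.10)

def simulate(drift):
    """Return (average sample fraction, rejection rate) when the bounds are followed."""
    total_n, rejections = 0.0, 0
    for _ in range(n_sim):
        b, t_prev, stopped = 0.0, 0.0, False
        for k, tk in enumerate(looks):
            b += rng.gauss(drift * (tk - t_prev), (tk - t_prev) ** 0.5)
            t_prev = tk
            z = b / tk ** 0.5
            if k < 3 and z < fut_bounds[k]:  # below the futility bound: stop early
                total_n += tk
                stopped = True
                break
        if not stopped:
            total_n += 1.0
            rejections += z >= z_final
    return total_n / n_sim, rejections / n_sim

asn_null, _ = simulate(0.0)  # ASN under H0 (table reports 0.584)
_, power = simulate(eta)     # power when the futility bounds are followed (fixed design: 0.900)
print(f"ASN under H0: {asn_null:.3f}; power with futility stopping: {power:.3f}")
```

The simulated ASN should be close to the tabulated 0.584, and the simulated power close to 0.890, i.e., the fixed-design power minus the nominal power loss of 0.010.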
31. DEMONSTRATION OF fut()
In addition to α and β, fut() lets the user choose the interim time points, the corresponding conditional (predictive) power thresholds, the scale (CP, CP(θ̂), or PP), and whether to apply SI.
Example:
•α = 0.05, β = 0.1
•Two futility looks at one third and one half of the sample size
•Use predictive power 0.2 and 0.3 respectively
•No sample size inflation to regain power
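The fut() interface itself is not reproduced here, but the conversion it must perform, from a predictive-power threshold to a z-scale futility bound, can be sketched directly. A minimal Python illustration for the example above, assuming a flat prior for the predictive power and the same two-sided α = 0.05 as in the earlier tables; the function name z_bound is ours, not part of fut():

```python
from statistics import NormalDist

N = NormalDist()
z_alpha = N.inv_cdf(0.975)  # 1.96, two-sided alpha = 0.05 (assumed, matching the tables)

def z_bound(pp, t):
    """Interim z below which flat-prior predictive power falls under pp at information fraction t.
    Inverts PP = Phi((z / sqrt(t) - z_alpha) * sqrt(t / (1 - t)))."""
    return t ** 0.5 * (z_alpha + N.inv_cdf(pp) * ((1 - t) / t) ** 0.5)

# Futility looks at 1/3 and 1/2 of the sample size, with PP thresholds 0.2 and 0.3
for pp, t in [(0.2, 1 / 3), (0.3, 1 / 2)]:
    print(f"t = {t:.3f}, PP threshold {pp}: stop for futility if z < {z_bound(pp, t):.3f}")
```

Because no SI is requested in this example, the final critical value stays at 1.96 and the futility bounds are nonbinding.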
32. DEMONSTRATION OF fut()
•Use the summary() function to print out the details:
33. DEMONSTRATION OF fut()
•Plot the boundary:
>plot(D)
•Plot the power function:
>powerplot(D)
34. CONCLUSIONS
We have established an SI framework for nonbinding futility rules with uncompromised power;
We have shown that in realistic situations (k = 2, 3), equal PP across the time points yields approximately optimal (in terms of ASN) bounds;
We have developed an easy-to-use R program, fut(), for designing nonbinding futility rules.
35. THANK YOU FOR YOUR ATTENTION!
QUESTIONS