This document summarizes a probability model for evaluating long-term effects and overdiagnosis of lung cancer screening using CT scans. The model categorizes individuals into four groups: symptom-free life, no early detection, true early detection, and overdiagnosis. It derives the probability of each group outcome for people with a history of one or more prior screens using data from the National Lung Screening Trial. The probabilities account for factors like screening test sensitivity and the distribution of lifetimes. The model can evaluate long-term outcomes and risks of overdiagnosis from regular lung cancer screening.
This document outlines a study that aims to evaluate long-term outcomes of periodic cancer screening, including the inference of overdiagnosis. It describes using a probability model and simulation based on data from the HIP study to derive the probability of each long-term outcome: symptom-free life, no early detection, true early detection, and overdiagnosis. The key outcomes are defined and equations are provided to calculate the probability of each outcome based on factors like screening sensitivity, sojourn time in preclinical and clinical states, and a person's lifetime as a random variable. The methodology aims to investigate the chance of overdiagnosis from continued screening and evaluate long-term effects for an entire screened cohort.
A basic lecture on literature types, the importance of primary literature (papers, articles), study designs, and the organization of a scientific paper. P values and the assessment of a new test are additional topics.
Ana Marusic - MedicReS World Congress 2011
Four clinical trials (Trials A-D) tested active treatments against placebo for about 5 years. Trial A reported survival rates, Trial B reported risk reduction, Trial C reported mortality reduction, and Trial D reported number needed to treat. Clinicians considered Trials B and D most useful for practice based on how the results were reported. Reporting guidelines recommend presenting numbers of events, absolute risk reductions, relative risks with confidence intervals, and number needed to treat to improve interpretation and clinical applicability of trial results. Adopting reporting standards can enhance transparency and reliability of research literature.
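The arithmetic behind these reporting measures can be sketched in a few lines; the 5-year event counts below are hypothetical illustrations, not data from Trials A-D:

```python
# Hypothetical 5-year outcome data for a two-arm trial (illustration only).
control_events, control_n = 150, 1000   # events in the placebo arm
treated_events, treated_n = 100, 1000   # events in the active-treatment arm

cer = control_events / control_n        # control event rate
eer = treated_events / treated_n        # experimental event rate

arr = cer - eer                         # absolute risk reduction
rr = eer / cer                          # relative risk
rrr = 1 - rr                            # relative risk reduction
nnt = 1 / arr                           # number needed to treat

print(f"ARR={arr:.3f}, RR={rr:.2f}, RRR={rrr:.2f}, NNT={nnt:.0f}")
```

Note how the same treatment effect looks far more impressive as a 33% relative risk reduction than as a 5% absolute risk reduction, which is why the guidelines ask for both.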
The document discusses adjuvant chemotherapy for early-stage (T1ab) breast cancer in 2014. It summarizes several retrospective studies that found clinical events occurred in 5-10% of T1abN0 patients who did not receive chemotherapy or HER2-targeted treatment. Events were also observed in treated patients, indicating a need to optimize treatment. Major prognostic factors identified were age, proliferation rate, triple-negative status, and HER2 positivity. The document compares treatment guidelines from NCCN, Saint-Gallen, and French experts, noting differences in their approaches and a need for a more individualized molecular-based strategy.
Answer the following. (5 pts ea)
A study is conducted to estimate survival in patients following kidney transplant. Key factors that adversely affect success of the transplant include advanced age and diabetes. This study involves 25 participants who are 65 years of age and older and all have diabetes. Following transplant, each participant is followed for up to 10 years. The following are times to death, in years, or the time to last contact (at which time the participant was known to be alive).
Deaths: 1.2, 2.5, 4.3, 5.6, 6.7, 7.3 and 8.1 years
Alive: 3.4, 4.1, 4.2, 5.7, 5.9, 6.3, 6.4, 6.5, 7.3, 8.2, 8.6, 8.9, 9.4, 9.5, 10, 10, 10, and 10 years
Use the life table approach to estimate the survival function, using two-year intervals: 0-2, 2-4, and so on.
Complete the table below.
Interval in Years | Number At Risk During Interval, Nt | Average Number At Risk During Interval, Nt* = Nt - Ct/2 | Number of Deaths During Interval, Dt | Lost to Follow-Up, Ct | Proportion Dying, qt = Dt/Nt* | Proportion Surviving, pt = 1 - qt | Survival Probability, St = pt*St-1
0-2  |  |  |  |  |  |  |
2-4  |  |  |  |  |  |  |
4-6  |  |  |  |  |  |  |
6-8  |  |  |  |  |  |  |
8-10 |  |  |  |  |  |  |
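The actuarial calculation named in the column headers can be sketched in Python using the death and censoring times listed above. The interval-boundary convention (events counted in [a, b), with the final interval closed at 10 so the t = 10 censorings are included) is an assumption:

```python
# Times to death and times to last contact (censored), in years, from the problem.
deaths = [1.2, 2.5, 4.3, 5.6, 6.7, 7.3, 8.1]
censored = [3.4, 4.1, 4.2, 5.7, 5.9, 6.3, 6.4, 6.5, 7.3,
            8.2, 8.6, 8.9, 9.4, 9.5, 10, 10, 10, 10]
intervals = [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10)]

n_at_risk = len(deaths) + len(censored)   # 25 participants at time 0
surv = 1.0
for i, (a, b) in enumerate(intervals):
    last = (i == len(intervals) - 1)
    # Count deaths and censorings falling in [a, b); the last interval
    # also includes its right endpoint so the t = 10 censorings count.
    d = sum(1 for t in deaths if a <= t < b or (last and t == b))
    c = sum(1 for t in censored if a <= t < b or (last and t == b))
    n_star = n_at_risk - c / 2            # average number at risk, Nt* = Nt - Ct/2
    q = d / n_star                        # proportion dying, qt = Dt/Nt*
    surv *= (1 - q)                       # St = pt * St-1
    print(f"{a}-{b}: Nt={n_at_risk} Dt={d} Ct={c} qt={q:.4f} St={surv:.4f}")
    n_at_risk -= d + c
```

Under these conventions the first interval gives qt = 1/25 = 0.04 and St = 0.96, and the estimate declines interval by interval as deaths accumulate.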
1. (cont.)
Use the Kaplan-Meier approach to estimate the survival function.
Complete the table below.
Time, Years | Number at Risk, Nt | Number of Deaths, Dt | Number Censored, Ct | Survival Probability, St+1 = St*((Nt - Dt)/Nt)
0    | 25 |  |  |
1.2  |  |  |  |
2.5  |  |  |  |
3.4  |  |  |  |
4.1  |  |  |  |
4.2  |  |  |  |
4.3  |  |  |  |
5.6  |  |  |  |
5.7  |  |  |  |
5.9  |  |  |  |
6.3  |  |  |  |
6.4  |  |  |  |
6.5  |  |  |  |
6.7  |  |  |  |
7.3  |  |  |  |
8.1  |  |  |  |
8.2  |  |  |  |
8.6  |  |  |  |
8.9  |  |  |  |
9.4  |  |  |  |
9.5  |  |  |  |
10.0 |  |  |  |
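The product-limit recursion in the table header can be sketched as a small Kaplan-Meier estimator over the same data. The tie-handling convention (a death processed before a censoring at the same time, as at t = 7.3) is an assumption:

```python
# Death and censoring times, in years, from the kidney transplant problem.
deaths = [1.2, 2.5, 4.3, 5.6, 6.7, 7.3, 8.1]
censored = [3.4, 4.1, 4.2, 5.7, 5.9, 6.3, 6.4, 6.5, 7.3,
            8.2, 8.6, 8.9, 9.4, 9.5, 10, 10, 10, 10]

# Merge into (time, is_death) pairs; at tied times, deaths come first.
events = sorted([(t, 1) for t in deaths] + [(t, 0) for t in censored],
                key=lambda e: (e[0], -e[1]))

n, surv = len(events), 1.0
km = {}                                   # time -> survival just after that time
for t, is_death in events:
    if is_death:
        surv *= (n - 1) / n               # S = S * (Nt - Dt)/Nt with Dt = 1
        km[t] = surv
    n -= 1                                # one fewer subject at risk either way

for t, s in km.items():
    print(f"S({t}) = {s:.4f}")
```

Censorings do not change the survival estimate directly; they only shrink the risk set, which makes each subsequent death weigh more heavily.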
1. (cont.)
Referring to the graph above:
What is the probability of surviving 6.5 years?
A. None
B. 0.85
C. 0.60
D. 0.90
Patients have an 85% chance of surviving how many years?
A. 6.0
B. 4.25
C. 3.2
D. 5.5
2.
An observational cohort study is conducted to compare time to early failure in patients undergoing joint replacement surgery. Of specific interest is whether there is a difference in time to early failure between patients who are considered obese versus those who are not. The study is run for 40 weeks and times to early joint failure, measured in weeks, are shown below for participants classified as obese or not at the time of surgery.
Obese: Failure | Obese: No Failure | Not Obese: Failure | Not Obese: No Failure
28 | 39 | 27 | 37
25 | 41 | 31 | 36
31 | 37 | 34 | 39
32 | 35 | 40 | 38
36 | 36 | 32 | 29
39 | 41 |    |
Estimate the survival functions (time to early joint failure) for each group using the Kaplan-Meier approach.
Complete the table below.
Obese
Time, Weeks | Number at Risk, Nt | Number of Events (Joint Failures), Dt | Number Censored, Ct | Survival Probability, St+1 = St*((Nt - Dt)/Nt)
0  | 11 |  |  |
25 |  |  |  |
28 |  |  |  |
29 |  |  |  |
31 |  |  |  |
32 |  |  |  |
35 |  |  |  |
36 |  |  |  |
37 |  |  |  |
38 |  |  |  |
39 |  |  |  |
41 |  |  |  |
2. (cont.)
Non-Obese
Complete the table below.
Time, Weeks | Number at Risk, Nt | Number of Events (Joint Failures), Dt | Number Censored, Ct | Survival Probability, St+1 = St*((Nt - Dt)/Nt)
0  | 11 |  |  |
27 |  |  |  |
31 |  |  |  |
32 |  |  |  |
34 |  |  |  |
36 |  |  |  |
37 |  |  |  |
39 |  |  |  |
40 |  |  |  |
41 |  |  |  |
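Once both survival functions are tabulated, the natural follow-up is a formal comparison of the two curves. A minimal two-sample log-rank statistic can be sketched as below; the two small datasets at the bottom are hypothetical illustrations, not the obese/non-obese values above:

```python
def logrank_chi2(group_a, group_b):
    """Two-sample log-rank chi-square statistic.

    Each group is a list of (time, is_event) pairs, where is_event = 1
    marks a failure and 0 marks a censored observation.
    """
    times = sorted({t for t, e in group_a + group_b if e == 1})
    obs_a = exp_a = var = 0.0
    for t in times:
        n_a = sum(1 for u, _ in group_a if u >= t)   # at risk in group A
        n_b = sum(1 for u, _ in group_b if u >= t)   # at risk in group B
        d_a = sum(1 for u, e in group_a if u == t and e == 1)
        d_b = sum(1 for u, e in group_b if u == t and e == 1)
        n, d = n_a + n_b, d_a + d_b
        obs_a += d_a
        exp_a += d * n_a / n                          # expected events in A
        if n > 1:                                     # hypergeometric variance
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return (obs_a - exp_a) ** 2 / var

# Hypothetical example: group A tends to fail earlier than group B.
a = [(2, 1), (4, 1), (5, 0), (7, 1), (9, 0)]
b = [(6, 1), (8, 0), (10, 1), (12, 0), (14, 0)]
print(f"log-rank chi-square = {logrank_chi2(a, b):.3f}")
```

The statistic is compared against a chi-square distribution with one degree of freedom; values above about 3.84 indicate a difference at the 5% level.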
This document provides an overview of survival analysis concepts and methods. It defines time-to-event data and censoring, and describes how to calculate a Kaplan-Meier survival curve from censored data. It also discusses log-rank tests to compare survival curves between groups and the Cox proportional hazards regression model for assessing the effects of multiple covariates on survival.
The document discusses survival analysis and Cox regression for cancer clinical trials. It begins with an overview of clinical trials for cancer, noting their complexity, long duration, high costs, and ethical concerns. It then covers survival analysis, describing key concepts like survival curves, hazard functions, and the Kaplan-Meier method for estimating survival when there is censoring. The document provides an example of survival data from a cancer study and discusses assumptions and parameters used in survival analysis like median and mean survival times.
1. The document discusses hypothesis testing, including defining the null and alternative hypotheses, types of errors, test statistics, and testing differences between population means and differences between two samples.
2. Examples are provided to demonstrate hypothesis testing for one and two sample means. This includes stating the hypotheses, significance level, test statistic, critical region, and conclusion.
3. Assignments are given applying hypothesis testing to compare lung destruction between smokers and non-smokers, serum complement activity between disease and normal subjects, and podiatric problems between elderly diabetic and non-diabetic patients.
The document provides an overview of survival analysis. It defines survival analysis as a branch of statistics that focuses on time-to-event data and their analysis. It discusses censored and truncated data, the life table method, the Kaplan-Meier estimator for estimating survival functions when there is censoring, and the Cox regression model for assessing relationships between covariates and survival times. The key aspects of survival analysis are estimating the probability of surviving past a certain time point and comparing survival distributions between groups while accounting for censored observations.
The PARTNER trial studied 358 inoperable patients with severe aortic stenosis who were randomly assigned to either transfemoral aortic valve implantation (TAVI) or standard therapy. At 1 year, all-cause mortality was significantly lower in the TAVI group compared to standard therapy (30.7% vs 50.7%, p<0.0001). TAVI also improved cardiac symptoms and walking distance. While TAVI was associated with more complications initially, serial echocardiograms found reduced gradients and stable valve function over 1 year. The study demonstrated TAVI should be the new standard of care for inoperable aortic stenosis patients.
Lecture 1 on survival analysis, HRP 262 class - TroyTeo1
1. Survival analysis is a set of statistical methods used to analyze longitudinal data on the occurrence of events such as death, disease onset, or recovery. It can accommodate data from randomized clinical trials or cohort studies.
2. Key concepts in survival analysis include the survival function, which gives the probability of surviving past a particular time, and the hazard function, which provides the instantaneous risk of an event at a particular time given survival up to that time.
3. Common distributions used in parametric survival analysis to model event times include the exponential distribution, which assumes a constant hazard over time, and the Weibull distribution, which allows the hazard to increase or decrease over time.
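The contrast drawn in point 3 between the exponential and Weibull families can be shown directly from their hazard functions; the parameterization below (scale lam, shape k, with h(t) = (k/lam)*(t/lam)**(k-1)) is one common convention:

```python
def exponential_hazard(t, lam):
    # Constant hazard: h(t) = 1/lam, the same at every time t.
    return 1.0 / lam

def weibull_hazard(t, lam, k):
    # h(t) = (k/lam) * (t/lam)^(k-1): rises over time for k > 1, falls
    # for k < 1, and reduces to the exponential case when k == 1.
    return (k / lam) * (t / lam) ** (k - 1)

for t in (1.0, 2.0, 4.0):
    print(f"t={t}: exp={exponential_hazard(t, 2.0):.3f} "
          f"weibull(k=0.5)={weibull_hazard(t, 2.0, 0.5):.3f} "
          f"weibull(k=2)={weibull_hazard(t, 2.0, 2.0):.3f}")
```

With k = 2 the hazard grows linearly in t (wear-out behavior), while k = 0.5 gives a hazard that falls as t increases (early-failure behavior).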
Simulation Study for Extended AUC In Disease Risk Prediction in survival anal... - Gang Cui
- The document describes methods for estimating the extended AUC and correlation coefficient (CORR) between a risk score (Z) and event time (T), given the event time is less than a time of interest (T0).
- Two methods are presented for estimating extended AUC: a counting method and a survival analysis method. A survival analysis method is also described for estimating CORR(Z,T|T<T0).
- The performance of the estimators is evaluated by comparing estimates from simulated data to true values, where the data generation process is known. Results suggest the estimators have low bias.
Sophie Taieb: Breast MRI in neoadjuvant chemotherapy: A predictive respons... - breastcancerupdatecongress
This document discusses the use of breast MRI in evaluating response to neoadjuvant chemotherapy. MRI can provide both morphological and functional information about tumors. Studies show DCE-MRI and DWI-MRI may help assess response after 1-2 cycles of chemotherapy, with changes in tumor size, kinetic parameters and ADC values predicting pathological complete or near-complete response. Larger prospective trials are still needed to standardize MRI methods and thresholds to determine if changes on MRI could guide modifications to chemotherapy regimens for non-responders. Overall, MRI shows potential as a predictive marker and non-invasive method for monitoring early response to neoadjuvant breast cancer treatment.
This manual is useful and indispensable for working with the "Package TesSurvRec_1.2.1" from CRAN. It is relevant to statisticians, physicians, pharmacists, insurers, banks, engineers, psychologists, astronomers, and other professions. It covers statistical tests used to measure differences between survival functions across population groups that experience recurrent events.
End to end standards driven oncology study (solid tumor, Immunotherapy, Leuke... - Kevin Lee
Each therapeutic area has its own unique data collection and analysis. Oncology especially, has particularly specific standards for collection and analysis of data. Oncology studies are also separated into one of three different sub types according to response criteria guidelines. The first sub type, Solid Tumor study, usually follows RECIST (Response Evaluation Criteria in Solid Tumor). The second sub type, Lymphoma study, usually follows Cheson. Lastly, Leukemia study follows study specific guidelines (IWCLL for Chronic Lymphocytic Leukemia, IWAML for Acute Myeloid Leukemia, NCCN Guidelines for Acute Lymphoblastic Leukemia and ESMO clinical practice guides for Chronic Myeloid Leukemia).
This paper will demonstrate the notable level of sophistication implemented in CDISC standards, mainly driven by the differentiation across different response criteria. The paper will specifically show what SDTM domains are used to collect the different data points in each type. For example, Solid tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessment in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin and platelet count) in LB and genetic mutation as well as what are collected in Lymphoma studies. The paper will also introduce oncology terminologies (e.g., CR, PR, SD, PD, NE) and oncology-specific ADaM data sets - Time to Event (--TTE) data set.
Finally, the paper will show how standards (e.g., response criteria guidelines and CDISC) will streamline clinical trial artefacts development in oncology studies and how end to end clinical trial artefacts development can be accomplished through this standards-driven process.
The document discusses several examples of modeling and uncertainty quantification:
1) Weather and climate modeling involves coupling complex multi-physics models that contain uncertainties in inputs, numerical approximations, and sensor measurements. The goal is to assimilate data to quantify uncertain initial conditions and parameters and make predictions with quantified uncertainties.
2) Pressurized water reactor (PWR) modeling involves multi-scale, multi-physics models with large numbers of uncertain inputs and parameters. Quantifying these uncertainties and understanding their impact on important outputs like peak operating temperature and CRUD buildup is challenging.
3) HIV and epidemic models have many uncertain parameters that cannot be directly measured. Bayesian inference and MCMC sampling are used to quantify parameter uncertainties and make predictions with
Ph250b.14 Measures of Disease, Part 2 - Fri Sep 5 2014 - A M
This document outlines learning objectives and concepts related to measuring disease in epidemiology. It discusses different types of populations, concepts of disease occurrence over time, and key epidemiologic measures including prevalence, incidence, risk, rates, and methods for calculating cumulative incidence. Cumulative incidence can be calculated using simple, actuarial, Kaplan-Meier, or density methods, each with different assumptions about follow-up time and censoring. The relationships between prevalence, incidence, and risk/rates are also reviewed.
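The density (person-time) method for cumulative incidence mentioned above can be made concrete with a short sketch; the follow-up data are hypothetical:

```python
import math

# Hypothetical cohort: (years of follow-up, developed disease?) per person.
follow_up = [(5.0, False), (3.2, True), (5.0, False), (1.4, True),
             (5.0, False), (2.8, False), (5.0, False), (4.1, True)]

cases = sum(1 for _, event in follow_up if event)
person_years = sum(years for years, _ in follow_up)

incidence_rate = cases / person_years          # cases per person-year
# Density method: cumulative incidence over t years assuming a constant rate.
t = 5.0
cumulative_incidence = 1 - math.exp(-incidence_rate * t)

print(f"rate = {incidence_rate:.4f}/person-year, "
      f"5-year CI = {cumulative_incidence:.3f}")
```

Unlike the simple method (cases divided by people at the start), the person-time denominator credits each subject only for the time they were actually observed and at risk.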
Circulating tumor cells (CTCs) and circulating tumor cells in cerebrospinal fluid (CSFTCs) show promise as biomarkers in metastatic lung cancer. The document discusses various approaches to detecting CTCs/CSFTCs, clinical research on CTCs in lung cancer, comparisons of CTC detection methods, and preliminary results on detecting CSFTCs in breast, lung, and melanoma cancers. Detection of CSFTCs may allow evaluation of treatment efficacy and provide insights into metastatic properties by studying a more homogeneous cell population compared to CTCs in blood.
This document provides an overview of clinical trials for scleroderma (systemic sclerosis). It discusses the Royal Free Hospital scleroderma cohort and complications seen. Skin scoring methods and trajectories predicting outcomes are presented. Past and current immunomodulatory strategies and trials are reviewed, including methotrexate, mycophenolate, stem cell transplant, and rituximab. Ongoing and future trials targeting biological mechanisms are summarized, such as nintedanib, lenabasum, lanifibranor, and riociguat. Lessons from past trials and challenges for the future are discussed.
Trial plan with capitation payment of the national healthcare insurance in ta... - Shu-Jeng Hsieh
It is a research I'm also involved as a graduate student. It is submitted and accepted by PACIS 2015 (Pacific Asia Conference on Information Systems) and I am the presenter of the research in the conference.
Clinical data based optimal STI strategies for HIV: a reinforcement learning ... - Université de Liège (ULg)
This document summarizes a presentation on using reinforcement learning to determine optimal structured treatment interruption (STI) strategies for HIV patients based on clinical data. It discusses how clinical data from patients on drug regimens can be viewed as trajectories and processed using reinforcement learning techniques to infer STI policies without requiring an explicit model of HIV dynamics. The approach formulates STI optimization as a reinforcement learning problem to compute policies directly from sample trajectories that minimize costs like side effects and keep the virus under control.
Projecting ‘time to event’ outcomes in technology assessment: an alternative ... - cheweb1
This document discusses alternative methods for projecting survival outcomes in technology assessments beyond what is observed in clinical trials.
The standard method of fitting parametric survival functions to trial data and extrapolating is problematic as it assumes a single mechanism and does not account for trial design or changes in risk over time. LRiG proposes examining trial data to understand risk trajectories and formulating hypotheses based on clinical context rather than selecting a model solely on fit. A case study demonstrates modeling progression-free survival, post-progression survival, and overall survival as separate phases using exponential convolution functions. LRiG advocates understanding empirical data and developing more informative multi-phase models rather than relying on standard projections.
CDISC journey in solid tumor using RECIST 1.1 (Paper) - Kevin Lee
This document summarizes the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 guidelines for evaluating tumor response in clinical trials. It introduces the three types of oncology studies, defines target and non-target lesions, and describes how lesion measurements are used to determine complete response, partial response, stable disease, progressive disease, and not evaluable responses. It also discusses how RECIST 1.1 data are organized in CDISC SDTM and ADaM domains, and provides an example of how these domains can be used to evaluate tumor response and analyze outcomes like objective response rate, progression-free survival, and time to progression.
Rectal dose constraints for salvage iodine-125 prostate brachytherapy - Max Peters
1) Focal salvage iodine-125 brachytherapy (FS I-125 BT) for recurrent prostate cancer aims to target only the recurrent lesion, reducing rectal radiation dose compared to total salvage I-125 BT (TS) which treats the entire prostate.
2) The study analyzed rectal dosimetry for 20 FS and 28 TS patients, finding significantly lower rectal radiation doses for FS patients. Rectal D0.1cc, D1cc, D2cc and V100 were 38-46 Gy lower for FS.
3) For TS patients, rectal dose constraints of D0.1cc ≤ 160 Gy, D1cc ≤ 119 Gy, D2
This document discusses survival analysis and Cox regression for cancer clinical trials. It begins with an introduction to Cox regression analysis and how it can be used to analyze the effects of covariates on survival rates in cancer trials. The document then provides examples of Cox regression outputs and how to interpret the results, including checking the proportional hazards assumption. It cautions against some invalid methods of survival analysis that do not properly account for censored or time-dependent data.
This document discusses the approach to peripheral lung nodules (PLNs). It begins by outlining low-dose CT scanning protocols and radiation doses. It then summarizes data from the National Lung Screening Trial showing a 20% reduction in mortality from lung cancer screening. Principles for screening Asian populations are discussed due to differences from Western populations. Guidelines for evaluating solid and subsolid nodules on imaging are presented. Techniques for bronchoscopic biopsy like navigation bronchoscopy are described and compared to transthoracic needle biopsy. Real-time localization is emphasized to optimize bronchoscopic yield. A case example illustrates these principles. Local data showing high diagnostic yield from bronchoscopic biopsy with navigation is also presented.
The document provides an overview of survival analysis. It defines survival analysis as a branch of statistics that focuses on time-to-event data and their analysis. It discusses censored and truncated data, the life table method, the Kaplan-Meier estimator for estimating survival functions when there is censoring, and the Cox regression model for assessing relationships between covariates and survival times. The key aspects of survival analysis are estimating the probability of surviving past a certain time point and comparing survival distributions between groups while accounting for censored observations.
The PARTNER trial studied 358 inoperable patients with severe aortic stenosis who were randomly assigned to either transfemoral aortic valve implantation (TAVI) or standard therapy. At 1 year, all-cause mortality was significantly lower in the TAVI group compared to standard therapy (30.7% vs 50.7%, p<0.0001). TAVI also improved cardiac symptoms and walking distance. While TAVI was associated with more complications initially, serial echocardiograms found reduced gradients and stable valve function over 1 year. The study demonstrated TAVI should be the new standard of care for inoperable aortic stenosis patients.
The PARTNER trial studied 358 inoperable patients with severe aortic stenosis who were randomly assigned to either transfemoral aortic valve implantation (TAVI) or standard therapy. At 1 year, all-cause mortality was significantly lower in the TAVI group compared to standard therapy (30.7% vs 50.7%, p<0.0001). TAVI also improved cardiac symptoms and walking distance. While TAVI was associated with more complications initially, serial echocardiograms found reduced gradients and stable valve function over 1 year. The study demonstrated TAVI should be the new standard of care for inoperable aortic stenosis patients.
lecture1 on survival analysis HRP 262 classTroyTeo1
1. Survival analysis is a set of statistical methods used to analyze longitudinal data on the occurrence of events such as death, disease onset, or recovery. It can accommodate data from randomized clinical trials or cohort studies.
2. Key concepts in survival analysis include the survival function, which gives the probability of surviving past a particular time, and the hazard function, which provides the instantaneous risk of an event at a particular time given survival up to that time.
3. Common distributions used in parametric survival analysis to model event times include the exponential distribution, which assumes a constant hazard over time, and the Weibull distribution, which allows the hazard to increase or decrease over time.
Simulation Study for Extended AUC In Disease Risk Prediction in survival anal...Gang Cui
- The document describes methods for estimating the extended AUC and correlation coefficient (CORR) between a risk score (Z) and event time (T), given the event time is less than a time of interest (T0).
- Two methods are presented for estimating extended AUC: a counting method and a survival analysis method. A survival analysis method is also described for estimating CORR(Z,T|T<T0).
- The performance of the estimators is evaluated by comparing estimates from simulated data to true values, where the data generation process is known. Results suggest the estimators have low bias.
Sophie Taieb : Breast MRI in neoadjuvant chemotherapy : A predictive respons...breastcancerupdatecongress
This document discusses the use of breast MRI in evaluating response to neoadjuvant chemotherapy. MRI can provide both morphological and functional information about tumors. Studies show DCE-MRI and DWI-MRI may help assess response after 1-2 cycles of chemotherapy, with changes in tumor size, kinetic parameters and ADC values predicting pathological complete or near-complete response. Larger prospective trials are still needed to standardize MRI methods and thresholds to determine if changes on MRI could guide modifications to chemotherapy regimens for non-responders. Overall, MRI shows potential as a predictive marker and non-invasive method for monitoring early response to neoadjuvant breast cancer treatment.
Este manual es útil e indispensable para el uso del "Package TesSurvRec_1.2.1" de CRAN. Importante para estadístico, médicos, farmacéuticos, seguros, bancos, ingenieros, psicólogos, astrónomos, entre otras profesiones. Son pruebas estadísticas que se utilizan para medir diferencias entre funciones del análisis de supervivencias de grupos de poblaciones que manifiestan eventos recurrentes.
End to end standards driven oncology study (solid tumor, Immunotherapy, Leuke... (Kevin Lee)
Each therapeutic area has its own unique data collection and analysis. Oncology especially, has particularly specific standards for collection and analysis of data. Oncology studies are also separated into one of three different sub types according to response criteria guidelines. The first sub type, Solid Tumor study, usually follows RECIST (Response Evaluation Criteria in Solid Tumor). The second sub type, Lymphoma study, usually follows Cheson. Lastly, Leukemia study follows study specific guidelines (IWCLL for Chronic Lymphocytic Leukemia, IWAML for Acute Myeloid Leukemia, NCCN Guidelines for Acute Lymphoblastic Leukemia and ESMO clinical practice guides for Chronic Myeloid Leukemia).
This paper will demonstrate the notable level of sophistication implemented in CDISC standards, mainly driven by the differentiation across different response criteria. The paper will specifically show what SDTM domains are used to collect the different data points in each type. For example, Solid tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessment in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin and platelet count) in LB and genetic mutation as well as what are collected in Lymphoma studies. The paper will also introduce oncology terminologies (e.g., CR, PR, SD, PD, NE) and oncology-specific ADaM data sets - Time to Event (--TTE) data set.
Finally, the paper will show how standards (e.g., response criteria guidelines and CDISC) will streamline clinical trial artefacts development in oncology studies and how end to end clinical trial artefacts development can be accomplished through this standards-driven process.
The document discusses several examples of modeling and uncertainty quantification:
1) Weather and climate modeling involves coupling complex multi-physics models that contain uncertainties in inputs, numerical approximations, and sensor measurements. The goal is to assimilate data to quantify uncertain initial conditions and parameters and make predictions with quantified uncertainties.
2) Pressurized water reactor (PWR) modeling involves multi-scale, multi-physics models with large numbers of uncertain inputs and parameters. Quantifying these uncertainties and understanding their impact on important outputs like peak operating temperature and CRUD buildup is challenging.
3) HIV and epidemic models have many uncertain parameters that cannot be directly measured. Bayesian inference and MCMC sampling are used to quantify parameter uncertainties and make predictions with quantified uncertainties.
Ph250b.14 Measures of Disease, Part 2, Fri Sep 5 2014 (A M)
This document outlines learning objectives and concepts related to measuring disease in epidemiology. It discusses different types of populations, concepts of disease occurrence over time, and key epidemiologic measures including prevalence, incidence, risk, rates, and methods for calculating cumulative incidence. Cumulative incidence can be calculated using simple, actuarial, Kaplan-Meier, or density methods, each with different assumptions about follow-up time and censoring. The relationships between prevalence, incidence, and risk/rates are also reviewed.
Circulating tumor cells (CTCs) and circulating tumor cells in cerebrospinal fluid (CSFTCs) show promise as biomarkers in metastatic lung cancer. The document discusses various approaches to detecting CTCs/CSFTCs, clinical research on CTCs in lung cancer, comparisons of CTC detection methods, and preliminary results on detecting CSFTCs in breast, lung, and melanoma cancers. Detection of CSFTCs may allow evaluation of treatment efficacy and provide insights into metastatic properties by studying a more homogeneous cell population compared to CTCs in blood.
This document provides an overview of clinical trials for scleroderma (systemic sclerosis). It discusses the Royal Free Hospital scleroderma cohort and complications seen. Skin scoring methods and trajectories predicting outcomes are presented. Past and current immunomodulatory strategies and trials are reviewed, including methotrexate, mycophenolate, stem cell transplant, and rituximab. Ongoing and future trials targeting biological mechanisms are summarized, such as nintedanib, lenabasum, lanifibranor, and riociguat. Lessons from past trials and challenges for the future are discussed.
Trial plan with capitation payment of the national healthcare insurance in ta... (Shu-Jeng Hsieh)
This is research I was also involved in as a graduate student. It was submitted to and accepted by PACIS 2015 (Pacific Asia Conference on Information Systems), and I presented the research at the conference.
Clinical data based optimal STI strategies for HIV: a reinforcement learning ... (Université de Liège (ULg))
This document summarizes a presentation on using reinforcement learning to determine optimal structured treatment interruption (STI) strategies for HIV patients based on clinical data. It discusses how clinical data from patients on drug regimens can be viewed as trajectories and processed using reinforcement learning techniques to infer STI policies without requiring an explicit model of HIV dynamics. The approach formulates STI optimization as a reinforcement learning problem to compute policies directly from sample trajectories that minimize costs like side effects and keep the virus under control.
Projecting ‘time to event’ outcomes in technology assessment: an alternative ... (cheweb1)
This document discusses alternative methods for projecting survival outcomes in technology assessments beyond what is observed in clinical trials.
The standard method of fitting parametric survival functions to trial data and extrapolating is problematic as it assumes a single mechanism and does not account for trial design or changes in risk over time. LRiG proposes examining trial data to understand risk trajectories and formulating hypotheses based on clinical context rather than selecting a model solely on fit. A case study demonstrates modeling progression-free survival, post-progression survival, and overall survival as separate phases using exponential convolution functions. LRiG advocates understanding empirical data and developing more informative multi-phase models rather than relying on standard projections.
CDISC journey in solid tumor using recist 1.1 (Paper) (Kevin Lee)
This document summarizes the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 guidelines for evaluating tumor response in clinical trials. It introduces the three types of oncology studies, defines target and non-target lesions, and describes how lesion measurements are used to determine complete response, partial response, stable disease, progression disease, and not evaluable responses. It also discusses how RECIST 1.1 data are organized in CDISC SDTM and ADaM domains, and provides an example of how these domains can be used to evaluate tumor response and analyze outcomes like objective response rate, progression-free survival, and time to progression.
Rectal dose constraints for salvage iodine-125 prostate brachytherapy. (Max Peters)
1) Focal salvage iodine-125 brachytherapy (FS I-125 BT) for recurrent prostate cancer aims to target only the recurrent lesion, reducing rectal radiation dose compared to total salvage I-125 BT (TS) which treats the entire prostate.
2) The study analyzed rectal dosimetry for 20 FS and 28 TS patients, finding significantly lower rectal radiation doses for FS patients. Rectal D0.1cc, D1cc, D2cc and V100 were 38-46 Gy lower for FS.
3) For TS patients, rectal dose constraints of D0.1cc ≤ 160 Gy, D1cc ≤ 119 Gy, D2
This document discusses survival analysis and Cox regression for cancer clinical trials. It begins with an introduction to Cox regression analysis and how it can be used to analyze the effects of covariates on survival rates in cancer trials. The document then provides examples of Cox regression outputs and how to interpret the results, including checking the proportional hazards assumption. It cautions against some invalid methods of survival analysis that do not properly account for censored or time-dependent data.
This document discusses the approach to peripheral lung nodules (PLNs). It begins by outlining low-dose CT scanning protocols and radiation doses. It then summarizes data from the National Lung Screening Trial showing a 20% reduction in mortality from lung cancer screening. Principles for screening Asian populations are discussed due to differences from Western populations. Guidelines for evaluating solid and subsolid nodules on imaging are presented. Techniques for bronchoscopic biopsy like navigation bronchoscopy are described and compared to transthoracic needle biopsy. Real-time localization is emphasized to optimize bronchoscopic yield. A case example illustrates these principles. Local data showing high diagnostic yield from bronchoscopic biopsy with navigation is also presented.
Motivation and Goal
NLST Data and Probability Model
Application
Results and Summary
Long-term effects and over-diagnosis of CT scan
in lung cancer
Dongfeng Wu
Department of Bioinformatics and Biostatistics
School of Public Health and Information Sciences
University of Louisville
JSM, August 12, 2015
Wu screening effects
Background and Motivation
Hot debate on over-diagnosis: the diagnosis of a cancer that never
would have become symptomatic in a person's lifetime.
We have developed a probability model for evaluating long term
effects and over diagnosis for initially healthy people without any
screening history (Wu et al, Statistica Sinica 2014).
Older people may have gone through screening exams before and
seem healthy so far. How can the original model be extended to people
with a screening history?
We will investigate whether continued cancer screening for people at
risk causes a greater chance of over-diagnosis, and evaluate
long-term outcomes of regular screening for the whole cohort.
Methods will be applied to the computed tomography (CT) arm in
the National Lung Screening Trial (NLST) data.
How to Evaluate Long-term Effects?
People in periodic screening were categorized into 4 mutually exclusive
groups: Symptom-free-life, No-early-detection, True-early-detection, and
Over-diagnosis, based on diagnosis status and ultimate lifetime disease
status.
Table 1. Definition of outcomes/events in screening

                         ultimate lifetime disease status
diagnosis status         no symptom before death    symptom before death
not-screen-detected      Symptom-free-life          No-early-detection
screen-detected          Over-diagnosis             True-early-detection
All initially apparently healthy participants in a screening program will
fall into exactly one of these four groups.
Definition of Groups/Outcomes in Screening
Group 1 (Symptom-free-life (SympF)): A man who took part in
screening exams, was never diagnosed with lung cancer, and ultimately
died of other causes.
Group 2 (No-early-detection (NoED)): A man who took part in
screening exams, but whose lung cancer manifested itself clinically and
was not detected by the scheduled screening exams.
Group 3 (True-early-detection (TrueED)): A man whose lung cancer was
diagnosed at a scheduled screening exam and whose clinical symptoms
would have appeared before death.
Group 4 (Over-diagnosis (OverD)): A man who was diagnosed with lung
cancer at a scheduled screening exam BUT whose clinical symptoms
would NOT have appeared before his death.
The probability of each group for people with no screening history has been
derived and estimated (Wu et al., Statistica Sinica 2014). Now we extend that
work to older people with a screening history.
The Model
The Progressive disease model:
S0 (disease-free) → Sp (preclinical, entered at age t1) → Sc (clinical, entered at age t2)

sojourn time: t2 − t1.
lead time: t2 − t, where t (t1 < t < t2) is the age at screen detection.
sensitivity: β = P(X = 1|D = 1).
The NLST Study
About 54,000 male and female heavy smokers were enrolled
between 08/2002 and 04/2004. Data collection finished by
12/2009.
They were randomized to 2 arms: chest X-ray or low-dose
spiral CT.
Each arm underwent 3 annual screenings; more tumor cases
were diagnosed in the CT arm than in the chest X-ray arm.
Initial screening age 55−74.
Table 2: The NLST Data - Overview
Group within Study    Total subj.^a    Screen-diag. No.^b    Interval No.^c
The NLST: Chest X-ray
Overall 26226 279 177
male smokers 15500 165 107
female smokers 10726 114 70
The NLST: Spiral CT
Overall 26452 649 60
male smokers 15621 384 44
female smokers 10831 265 16
^a Total number of people who ever received the screening exam for lung cancer.
^b Total number of subjects diagnosed at a regular screening exam.
^c Total number of clinical incident cases between two regular screenings.
Definition
Let t0 < t1 < · · · < tk−1 be the k ordered screening exam
times, with follow-up ending at tk.
ni : the number of individuals examined at ti−1
si : screening detected cases at the exam given at ti−1
ri : interval cases, the number of cases found in the clinical
state (Sc) within (ti−1, ti ).
(ni , si , ri ): data stratified by initial age in the i-th interval.
Table 3: The NLST - CT group data
Age n1 s1 r1 n2 s2 r2 n3 s3 r3
· · · · · ·
60 1946 16 3 1847 13 1 1797 17 0
61 1786 18 0 1678 14 1 1659 11 3
62 1548 11 1 1452 8 2 1408 12 0
63 1427 14 1 1350 6 2 1320 11 0
64 1352 17 0 1287 18 72 1240 11 3
· · · · · ·
The Probability Model
βi = β(ti ): sensitivity at age ti .
w(t)dt: transition probability from S0 → Sp at age (t, t + dt)
q(z): pdf of sojourn time S = Sc − Sp.
Q(z) = P(S > z) = ∫_z^∞ q(x)dx: survivor function of the
sojourn time.
A person’s lifetime T ∼ fT (t).
ith generation: people who enter Sp during the ith interval (ti−1, ti),
i = 0, · · · , k, with t−1 ≡ 0.
Big picture: How to derive the probability?
Timeline: 0, past exams at ages t0 < t1 < · · · < tK1−1 (History), current age
tK1, future exams tK1+1, . . . , tK1+K−1 (Future), with lifetime T = tK1+K.

Let K1 = number of screens in the past, K = number of screens in the future. Define

HK1 = {a man who had screening exams at his ages t0 < t1 < · · · < tK1−1,
no lung cancer has been diagnosed,
and he is asymptomatic at his current age tK1}.
Our Plan:
(a) Derive the probability of each case when K1 = 1 and K = 1, with lifetime
T fixed.
(b) Derive the probability of each case for any screening numbers K1 and K,
with lifetime T fixed.
(c) Let T ∼ fT (t); then the future screening number K becomes a random
variable!
When K1 = K = 1:
Timeline: 0, past exam at t0 (History), current age t1, then the Future, with
lifetime T = t.

Assume an asymptomatic man at current age t1 who underwent one exam at
t0 (< t1), and whose lifetime is T = t (> t1). Define the event

H1 = {a man who had a screening exam at age t0, no lung cancer has been found,
and he is asymptomatic at current age t1}.
Then
P(H_1 \mid T \ge t_1) = 1 - \int_0^{t_1} w(x)\,dx
  + (1-\beta_0)\int_0^{t_0} w(x)\,Q(t_1 - x)\,dx
  + \int_{t_0}^{t_1} w(x)\,Q(t_1 - x)\,dx.
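As a sanity check, P(H1 | T ≥ t1) can be evaluated by quadrature. The sketch below uses assumed illustrative components (an exponential-decay transition density w, an exponential sojourn survivor Q, and a fixed sensitivity beta0); these are placeholders, not the fitted NLST forms.

```python
# Numerical evaluation of P(H1 | T >= t1) for K1 = K = 1, a sketch with
# assumed illustrative w, Q, and beta0 (not the fitted NLST components).
import math

def integrate(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

beta0 = 0.8                                  # assumed sensitivity at exam t0
w = lambda x: 0.002 * math.exp(-x / 40.0)    # assumed transition density S0 -> Sp
Q = lambda z: math.exp(-z / 2.0)             # assumed sojourn-time survivor fn

def p_H1(t0, t1):
    """P(H1 | T >= t1): three disjoint ways to be undiagnosed and
    asymptomatic at current age t1 after one negative screen at t0."""
    never_entered = 1.0 - integrate(w, 0.0, t1)                    # still in S0 at t1
    missed_at_t0 = (1.0 - beta0) * integrate(
        lambda x: w(x) * Q(t1 - x), 0.0, t0)                       # in Sp, missed at t0
    entered_after = integrate(lambda x: w(x) * Q(t1 - x), t0, t1)  # entered Sp after t0
    return never_entered + missed_at_t0 + entered_after

print(p_H1(55.0, 60.0))   # a value in (0, 1), close to 1 when transitions are rare
```

With a rare transition density (small w), the probability is close to 1, as expected for a mostly healthy cohort.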
Symptom-free-life and No-early-detection: K1 = K = 1
Given his lifetime T = t > t1,

P(\text{Case 1: SympF}, H_1 \mid T = t) = 1 - \int_0^{t} w(x)\,dx
  + (1-\beta_1)(1-\beta_0)\int_0^{t_0} w(x)\,Q(t - x)\,dx
  + (1-\beta_1)\int_{t_0}^{t_1} w(x)\,Q(t - x)\,dx
  + \int_{t_1}^{t} w(x)\,Q(t - x)\,dx.

P(\text{Case 2: NoED}, H_1 \mid T = t)
  = (1-\beta_1)(1-\beta_0)\int_0^{t_0} w(x)\,[Q(t_1 - x) - Q(t - x)]\,dx
  + (1-\beta_1)\int_{t_0}^{t_1} w(x)\,[Q(t_1 - x) - Q(t - x)]\,dx
  + \int_{t_1}^{t} w(x)\,[1 - Q(t - x)]\,dx.
True-early-detection and Over-diagnosis: K1 = K = 1

P(\text{Case 3: TrueED}, H_1 \mid T = t)
  = \beta_1(1-\beta_0)\int_0^{t_0} w(x)\,[Q(t_1 - x) - Q(t - x)]\,dx
  + \beta_1\int_{t_0}^{t_1} w(x)\,[Q(t_1 - x) - Q(t - x)]\,dx.

P(\text{Case 4: OverD}, H_1 \mid T = t)
  = \beta_1(1-\beta_0)\int_0^{t_0} w(x)\,Q(t - x)\,dx
  + \beta_1\int_{t_0}^{t_1} w(x)\,Q(t - x)\,dx.

If human lifetime T ∼ fT (t), then

P(\text{Case } i, H_1 \mid T \ge t_1)
  = \int_{t_1}^{\infty} P(\text{Case } i, H_1 \mid T = t)\,f_T(t \mid T \ge t_1)\,dt,
  \quad i = 1, 2, 3, 4,

where the conditional pdf f_T(t \mid T \ge t_1) = f_T(t)/P(T \ge t_1) for t \ge t_1.
Outcomes Evaluation: K1 = K = 1
We can prove that:

\sum_{i=1}^{4} P(\text{Case } i, H_1 \mid T = t) = P(H_1 \mid T \ge t_1).

Since the right-hand side does not depend on t, we have

\sum_{i=1}^{4} P(\text{Case } i, H_1 \mid T \ge t_1)
  = \int_{t_1}^{\infty} \Big[ \sum_{i=1}^{4} P(\text{Case } i, H_1 \mid T = t) \Big]
    f_T(t \mid T \ge t_1)\,dt
  = P(H_1 \mid T \ge t_1).

This implies

\sum_{i=1}^{4} P(\text{Case } i \mid H_1, T \ge t_1)
  = \frac{\sum_{i=1}^{4} P(\text{Case } i, H_1 \mid T \ge t_1)}{P(H_1 \mid T \ge t_1)} = 1.
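The partition identity can be verified numerically by implementing the four K1 = K = 1 case probabilities with assumed illustrative w, Q, beta0, beta1 (placeholders, not the NLST fits) and comparing their sum to P(H1 | T ≥ t1).

```python
# Numerical check of sum_{i=1}^4 P(Case i, H1 | T = t) = P(H1 | T >= t1),
# a sketch with assumed illustrative w, Q, beta0, beta1 (not the NLST fits).
import math

def integrate(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

beta0, beta1 = 0.8, 0.85                     # assumed sensitivities at t0, t1
w = lambda x: 0.002 * math.exp(-x / 40.0)    # assumed transition density
Q = lambda z: math.exp(-z / 2.0)             # assumed sojourn survivor function
t0, t1, t = 55.0, 60.0, 75.0                 # past exam, current age, fixed lifetime

p_h1 = (1 - integrate(w, 0, t1)
        + (1 - beta0) * integrate(lambda x: w(x) * Q(t1 - x), 0, t0)
        + integrate(lambda x: w(x) * Q(t1 - x), t0, t1))

sympf = (1 - integrate(w, 0, t)
         + (1 - beta1) * (1 - beta0) * integrate(lambda x: w(x) * Q(t - x), 0, t0)
         + (1 - beta1) * integrate(lambda x: w(x) * Q(t - x), t0, t1)
         + integrate(lambda x: w(x) * Q(t - x), t1, t))

diff = lambda x: Q(t1 - x) - Q(t - x)        # sojourn ends between t1 and t

noed = ((1 - beta1) * (1 - beta0) * integrate(lambda x: w(x) * diff(x), 0, t0)
        + (1 - beta1) * integrate(lambda x: w(x) * diff(x), t0, t1)
        + integrate(lambda x: w(x) * (1 - Q(t - x)), t1, t))

trueed = (beta1 * (1 - beta0) * integrate(lambda x: w(x) * diff(x), 0, t0)
          + beta1 * integrate(lambda x: w(x) * diff(x), t0, t1))

overd = (beta1 * (1 - beta0) * integrate(lambda x: w(x) * Q(t - x), 0, t0)
         + beta1 * integrate(lambda x: w(x) * Q(t - x), t0, t1))

print(abs(sympf + noed + trueed + overd - p_h1))  # ~0: the four cases partition H1
```

The cancellation is exact term by term: within each integration range the factors (1 − beta1) and beta1 sum to 1, collapsing Q(t − x) and Q(t1 − x) − Q(t − x) back to Q(t1 − x).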
For Any Fixed Past Scr. Number K1 and Future Scr. Number K
Define

HK1 = {a man who had screening exams at ages t0 < t1 < · · · < tK1−1,
no lung cancer has been diagnosed,
and he is asymptomatic at his current age tK1}.

Then

P(H_{K_1} \mid T \ge t_{K_1}) = 1 - \int_0^{t_{K_1}} w(x)\,dx
  + \sum_{j=0}^{K_1-1} (1-\beta_j)\cdots(1-\beta_{K_1-1})
    \int_{t_{j-1}}^{t_j} w(x)\,Q(t_{K_1} - x)\,dx
  + \int_{t_{K_1-1}}^{t_{K_1}} w(x)\,Q(t_{K_1} - x)\,dx,

with t_{-1} ≡ 0.
When Lifetime T is a Random Variable
We can prove that for any screen numbers K1 ≥ 1 and K ≥ 1:

\sum_{i=1}^{4} P(\text{Case } i, H_{K_1} \mid T = t_{K_1+K})
  = P(H_{K_1} \mid T \ge t_{K_1}).   (1)

For an asymptomatic man at his current age tK1, his lifetime is not a
fixed value but a random variable. If his future screening schedule is
tK1 < tK1+1 < . . . , then the number of future screening exams is also a
random variable:

K = K(T) = n, \quad \text{if } t_{K_1+n-1} < T < t_{K_1+n}.

Then

P(\text{Case } i, H_{K_1} \mid T \ge t_{K_1})
  = \int_{t_{K_1}}^{\infty} P(\text{Case } i, H_{K_1} \mid K = K(T), T = t)\,
    f_T(t \mid T \ge t_{K_1})\,dt,

and

\sum_{i=1}^{4} P(\text{Case } i \mid H_{K_1}, T \ge t_{K_1}) = 1.
Application to the NLST-CT Data
Sensitivity: β(t) = P(Screen + | Sp, age = t), modeled as a logistic function:

\beta(t) = \frac{1}{1 + \exp(-b_0 - b_1 (t - m))}.

Transition density S0 → Sp: 0.3 × a lognormal pdf,

w(t \mid \mu, \sigma^2) = \frac{0.3}{\sqrt{2\pi}\,\sigma t}
  \exp\left\{ -\frac{(\log t - \mu)^2}{2\sigma^2} \right\}, \quad \sigma > 0.

Sojourn time: log-logistic distribution,

q(t) = \frac{\kappa t^{\kappa-1} \rho^{\kappa}}{[1 + (t\rho)^{\kappa}]^2},
  \quad \kappa > 0,\ \rho > 0.

The distribution of the life span fT (t) was derived from the period life
table of the Social Security Administration:
http://www.ssa.gov/OACT/STATS/table4c6.html
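These three parametric components can be coded directly. The parameter values below (b0, b1, m, mu, sigma, kappa, rho) are illustrative placeholders rather than the fitted NLST estimates; the log-logistic survivor function Q(z) = 1/[1 + (z*rho)^kappa] is included in closed form.

```python
# Sketch of the three parametric components of the NLST-CT application.
# All parameter defaults are illustrative placeholders, not fitted values.
import math

def beta_sens(t, b0=1.0, b1=0.1, m=62.0):
    """Logistic screening sensitivity beta(t) at age t."""
    return 1.0 / (1.0 + math.exp(-b0 - b1 * (t - m)))

def w_trans(t, mu=4.2, sigma=0.2):
    """Scaled lognormal transition density S0 -> Sp: 0.3 * lognormal pdf."""
    return 0.3 / (math.sqrt(2.0 * math.pi) * sigma * t) * \
        math.exp(-(math.log(t) - mu) ** 2 / (2.0 * sigma ** 2))

def q_sojourn(t, kappa=2.0, rho=0.5):
    """Log-logistic sojourn-time pdf q(t)."""
    return kappa * t ** (kappa - 1) * rho ** kappa / (1.0 + (t * rho) ** kappa) ** 2

def Q_sojourn(z, kappa=2.0, rho=0.5):
    """Survivor function Q(z) = P(S > z) of the log-logistic sojourn time."""
    return 1.0 / (1.0 + (z * rho) ** kappa)

print(beta_sens(62.0))  # 1/(1 + e^-1), about 0.731 at t = m when b0 = 1
print(Q_sojourn(0.0))   # 1.0, since P(S > 0) = 1
```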
Figure 1: Conditional PDF of lifetime for both genders in
the US
[Three panels: the conditional lifetime pdf averaged over genders,
(ft.F + ft.M)/2, given current age t_current = 60, 70, and 80;
x-axis: age (up to 120), y-axis: density (0.00 to 0.04).]
Application to the NLST-CT Data
Let CT = the NLST CT data and θ = (b0, b1, µ, σ², κ, ρ). The
posterior predictive probability of each outcome is

P(\text{Case } i \mid T > t_{K_1}, H_{K_1}, CT)
  = \int P(\text{Case } i, \theta \mid T > t_{K_1}, H_{K_1}, CT)\,d\theta
  = \int P(\text{Case } i \mid T > t_{K_1}, H_{K_1}, \theta)\,f(\theta \mid CT)\,d\theta
  \approx \frac{1}{n} \sum_{j=1}^{n} P(\text{Case } i \mid T > t_{K_1}, H_{K_1}, \theta_j^{*}),

where θ*_j, j = 1, . . . , n = 800, are posterior samples from MCMC.
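The final averaging step is plain Monte Carlo over the posterior draws. In this sketch both the outcome-probability function and the 800 draws are hypothetical stand-ins for the actual model and MCMC output.

```python
# Sketch of the posterior-predictive step: average P(Case i | ..., theta)
# over MCMC posterior draws theta*_j. The draws and the outcome-probability
# function below are hypothetical stand-ins.
import math
import random

random.seed(0)
posterior_draws = [random.gauss(0.0, 0.5) for _ in range(800)]  # theta*_j

def case_prob(theta):
    """Hypothetical P(Case i | T > t_K1, H_K1, theta) for a scalar theta."""
    return 1.0 / (1.0 + math.exp(-theta))  # any smooth map into (0, 1)

# Posterior predictive: (1/n) * sum_j P(Case i | T > t_K1, H_K1, theta*_j)
pred = sum(case_prob(th) for th in posterior_draws) / len(posterior_draws)
print(pred)  # Monte Carlo estimate of P(Case i | T > t_K1, H_K1, CT)
```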
Table 4. Projection of lung cancer scr. outcomes using NLST-CT data
(∆1, ∆2)^a    P(SympF)^b    P(NoED)    P(TrueED)    P(OverD)
Initial screen age t0 = 50, current age tK1 = 60
(1 yr, 1 yr) 80.26(0.57) 1.86(0.28) 17.19(0.60) 0.66(0.09)
(2 yr, 1 yr) 80.05(0.56) 1.86(0.27) 17.40(0.60) 0.66(0.09)
(1 yr, 2 yr) 80.50(0.57) 6.31(0.63) 12.74(0.55) 0.42(0.08)
(2 yr, 2 yr) 80.29(0.57) 6.30(0.63) 12.96(0.56) 0.43(0.08)
Initial screen age t0 = 50, current age tK1 = 70
(1 yr, 1 yr) 86.13(0.39) 1.30(0.24) 11.90(0.42) 0.67(0.09)
(2 yr, 1 yr) 85.64(0.42) 1.31(0.24) 12.38(0.44) 0.67(0.09)
(1 yr, 2 yr) 86.38(0.39) 4.33(0.44) 8.86(0.45) 0.42(0.08)
(2 yr, 2 yr) 85.88(0.41) 4.33(0.44) 9.36(0.50) 0.43(0.08)
Initial screen age t0 = 50, current age tK1 = 80
(1 yr, 1 yr) 94.20(0.26) 0.55(0.16) 4.73(0.23) 0.52(0.08)
(2 yr, 1 yr) 93.75(0.30) 0.57(0.17) 5.16(0.27) 0.53(0.09)
(1 yr, 2 yr) 94.38(0.25) 1.75(0.21) 3.54(0.26) 0.34(0.07)
(2 yr, 2 yr) 93.92(0.28) 1.76(0.23) 3.97(0.31) 0.36(0.07)
^a ∆1 and ∆2 are the screening intervals in the history and in the future, respectively.
^b Mean probabilities (with standard errors) are in percentages.
Table 5. Estimated probability of over-diagnosis in screen-detected cases
(with 95% C.I.)
(∆1, ∆2)    P(TrueED|D)^e    P(OverD|D)
Initial screen age t0 = 50, current age tK1 = 60
(1 yr, 1 yr) 96.31 (95.10, 97.12) 3.69 (2.88, 4.90)
(2 yr, 1 yr) 96.35 (95.17, 97.15) 3.65 (2.85, 4.83)
(1 yr, 2 yr) 96.78 (95.62, 97.51) 3.22 (2.49, 4.38)
(2 yr, 2 yr) 96.83 (95.70, 97.54) 3.17 (2.46, 4.30)
Initial screen age t0 = 50, current age tK1 = 70
(1 yr, 1 yr) 94.68 (93.01, 95.79) 5.32 (4.21, 6.99)
(2 yr, 1 yr) 94.85 (93.26, 95.90) 5.15 (4.10, 6.74)
(1 yr, 2 yr) 95.45 (93.87, 96.46) 4.55 (3.54, 6.13)
(2 yr, 2 yr) 95.62 (94.12, 96.58) 4.38 (3.42, 5.88)
Initial screen age t0 = 50, current age tK1 = 80
(1 yr, 1 yr) 90.21 (87.32, 92.16) 9.79 (7.84, 12.68)
(2 yr, 1 yr) 90.71 (87.98, 92.51) 9.29 (7.49, 12.02)
(1 yr, 2 yr) 91.29 (88.55, 93.06) 8.71 (6.94, 11.45)
(2 yr, 2 yr) 91.82 (89.26, 93.44) 8.18 (6.56, 10.74)
^e Event D = {cancer was diagnosed at a regularly scheduled exam}.
Summary of Table 4
The probability of symptom-free-life is stable within each age
group; it increases as current age increases, about 80-95% for all
age groups.
The percentage of over-diagnosis in the whole population (not the
same as "false positives") is small, about 0.34−0.67%. It decreases
slightly as age advances and as the screening interval increases.
The probability of true-early-detection decreases as future
screening interval ∆2 increases and as current age increases.
The probability of no-early-detection increases as future ∆2
increases; but it decreases as current age increases.
Summary of Table 5
Among the screen-detected cases, the probability of
over-diagnosis increases with current age (from about
3% to 9%). It is about 8−9% for 80-year-olds, with 95%
CI (6%, 12%).
Among the screen-detected cases, the probability of
true-early-detection decreases with current age,
from about 96% to 90%.
This study provides a systematic approach to assessing the
outcomes of regular screening for older people with a screening
history.
Acknowledgements and References
I want to thank Beth Levitt, Tom Riley and Jerome Mabie of
IMS for helping organize the NLST-CT data, and Ruiqi Liu of UL
for providing MCMC posterior samples for this study.
Thank you!
Wu D, Kafadar K, and Rai S (2015). Inference of future screening
outcomes for older age groups with a screening history. Draft.
Wu D and Rosner GL (2014). Inference of long term effects and
over-diagnosis in periodic cancer screening. Statistica Sinica 24,
815-831.
Wu D, Kafadar K, Rosner GL, and Broemeling LD (2012). The lead
time distribution when lifetime is subject to competing risks in
cancer screening. The International Journal of Biostatistics 8(1),
Article 6, April 2012.