Hsini (Terry) Liao, Ph.D., Yun Lu, Hong Wang, “Meta-Analysis of Time-to-Event Survival Curves in Drug Eluting Stent Data”, Abstract No 304048, Joint Statistical Meetings, Session No 205, Washington D.C., August 2009
1. Meta-Analysis of Time-to-Event Survival Curves in Drug-Eluting Stent Data
Hsini (Terry) Liao*, PhD
Yun Lu, MSc
Hong Wang, MSc
Boston Scientific Corporation
*Contact: terry.liao@bsci.com
JSM 2009 1
2. Outline
• Motivation
• Meta-Analysis Overview
• Application to Survival Curves
• Case Study and Simulation
• Summary and Future Work
• References
3. Motivation
• Meta-analysis provides a structure of consolidating
the outcomes from several studies and deriving
statistical inference of the outcomes
• Meta-analysis of time-to-event data is less common
than meta-analysis of binary or continuous data
• Fixed effect vs. random effects models
• Patient-level vs. study-level data
• Hazard ratio (HR) vs. Kaplan-Meier (KM) curve
• Different follow-up schedules
4. Motivation (Cont’d)
Sutton, A.J., Higgins, J.P. “Recent Developments in Meta-Analysis”, Stat in Med. 2008; 27:625-650
5. Meta-Analysis Overview
• A systematic review of literature to measure the
effect size
• Single study/effect
• Many studies/narrative review
• Effect magnitude/adequate precision
• Combine the effects to give overall mean effect
• Effect size: event rate, OR, RR, HR, etc.
• Sample size/standard error to assign weight
6. (Current) Application to Survival Curves
• Extract data from KM curves
• Estimate ln(HRij) and var[ln(HRij)] for each
study
• The HR is a summary of the difference
between two KM curves
• Consider time-to-event and censoring,
otherwise HR=RR
• Variety of scenarios (e.g. CI)
• MS Excel spreadsheet computation
7. HR Matrix
          Time 1   Time 2   Time 3   ...   Time J
Study 1   HR11     HR12     HR13     ...   HR1J
Study 2   HR21     HR22     HR23     ...   HR2J
  ...
Study K   HRK1     HRK2     HRK3     ...   HRKJ
8. Application to Survival Curves (Cont’d)
• Formal definition of the log hazard ratio
• Reported number of observed events and number of expected events:

For each study i and each time point j,

    ln(HRij) = ln[ (OTij / ETij) / (OCij / ECij) ]
    var[ln(HRij)] = 1/ETij + 1/ECij
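The per-study, per-time-point computation above can be sketched in a few lines of Python. This is a minimal illustration, not code from the presentation; the argument names mirror the slide's notation, with T and C denoting the treatment and control arms.

```python
import math

def log_hr_from_oe(ot, et, oc, ec):
    """Log hazard ratio and its variance for one study and one time
    point, from observed (O) and expected (E) event counts in the
    treatment (T) and control (C) arms."""
    ln_hr = math.log((ot / et) / (oc / ec))
    var = 1.0 / et + 1.0 / ec
    return ln_hr, var
```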
9. Application to Survival Curves (Cont’d)

For each study i = 1, ..., K and each time point j,

    ln(HR·j) = [ Σ_{i=1..K} ln(HRij) / var[ln(HRij)] ] / [ Σ_{i=1..K} 1 / var[ln(HRij)] ]
    var[ln(HR·j)] = [ Σ_{i=1..K} 1 / var[ln(HRij)] ]^(-1)
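The inverse-variance pooling across studies at one time point can be sketched as follows (a hypothetical helper, not from the slides; each study's log HR is weighted by the reciprocal of its variance):

```python
def pool_inverse_variance(log_hrs, variances):
    """Fixed-effect pooled log HR at one time point: weight each
    study's log HR by 1/variance and normalize by the weight sum."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var
```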
10. Most Scenarios
• Reported the initial #patient at risk and
observed event count for each time point,
or equivalent information
• Take censoring into account if applicable
• Able to estimate expected events
11. Method 1: Proposed
(Overall Observed and Expected Events)
• For each study i, compute the sum of observed
events (OTi., OCi.) and the sum of expected events
(ETi., ECi.) across all time points
• Use the formal definition of the log hazard ratio
    ln(HRi·) = ln[ (OTi· / ETi·) / (OCi· / ECi·) ]
    var[ln(HRi·)] = 1/ETi· + 1/ECi·
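Method 1 can be sketched as a hypothetical helper that mirrors the formulas above: the per-time-point counts are summed first, and the log HR definition is applied once per study.

```python
import math

def method1_log_hr(obs_t, exp_t, obs_c, exp_c):
    """Proposed method: sum observed and expected events across all
    time points of one study, then apply the log HR definition once.
    Each argument is a list of per-time-point counts for one arm."""
    OT, ET = sum(obs_t), sum(exp_t)
    OC, EC = sum(obs_c), sum(exp_c)
    ln_hr = math.log((OT / ET) / (OC / EC))
    var = 1.0 / ET + 1.0 / EC
    return ln_hr, var
```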
12. Method 2: Parmar
(Observed and Expected Events)
• For each study i, compute log(HR) and associated
variance for each time point with observed events
(OTij, OCij) and expected events (ETij, ECij)
• Calculate the weighted mean of log(HR) across time
points for each study i
    ln(HRij) = ln[ (OTij / ETij) / (OCij / ECij) ]
    var[ln(HRij)] = 1/ETij + 1/ECij
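Method 2 differs from Method 1 only in the order of operations: a log HR is computed at each time point first and then averaged within the study using inverse-variance weights. A sketch under the same notation (the function name is an assumption of this illustration):

```python
import math

def method2_log_hr(obs_t, exp_t, obs_c, exp_c):
    """Parmar-style method: per-time-point log HRs combined by
    inverse-variance weighting within a single study."""
    lhs, variances = [], []
    for ot, et, oc, ec in zip(obs_t, exp_t, obs_c, exp_c):
        lhs.append(math.log((ot / et) / (oc / ec)))
        variances.append(1.0 / et + 1.0 / ec)
    weights = [1.0 / v for v in variances]
    ln_hr = sum(w * l for w, l in zip(weights, lhs)) / sum(weights)
    var = 1.0 / sum(weights)
    return ln_hr, var
```

With a single time point this reduces to the per-time-point formula above.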
13. Method 3: Williamson
(Observed Events and #Patient-At-Risk)
• For each study i, compute log(HR) and associated
variance for each time point with the observed
events (OTij, OCij) and #patient-at-risk (NTij, NCij)
• Calculate the weighted mean of log(HR) across
time points for each study i
    ln(HRij) = ln[ (OTij / NTij) / (OCij / NCij) ]
    var[ln(HRij)] = 1/OTij - 1/NTij + 1/OCij - 1/NCij
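Method 3 needs only the observed events and the number of patients at risk. A sketch mirroring the formulas above (NT/NC are the at-risk counts in the slide's notation; the function name is hypothetical):

```python
import math

def method3_log_hr(ot, nt, oc, nc):
    """Williamson-style log HR and variance for one time point, from
    observed events (ot, oc) and numbers at risk (nt, nc)."""
    ln_hr = math.log((ot / nt) / (oc / nc))
    var = 1.0 / ot - 1.0 / nt + 1.0 / oc - 1.0 / nc
    return ln_hr, var
```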
14. Case Study: Stent Data
• Study outcome: Target Vessel Revascularization (TVR)
• Post-hoc analysis set: diabetic vs. non-diabetic patients with
drug-eluting stent
• Propensity score adjustment (“like-to-like”): 1-to-1 match
• Data: total 1,554 DES patients over 7 studies
• Study 1: (n=308) 5 years
• Study 2: (n=352) 4 years
• Study 3: (n= 76) 5 years
• Study 4: (n=436) 3 years
• Study 5: (n= 84) 2 years
• Study 6: (n=186) 2 years
• Study 7: (n=112) 2 years
• The hazard ratio estimate of study outcomes from patient level
data is compared with that from the study level data to assess
the treatment effect in DES patients (Diabetic vs. Non-
Diabetic).
15. HR Matrix
(Calculation Using Formal Definition)
Year 1 Year 2 Year 3 Year 4 Year 5
Study 1 1.99 1.27 1.07 6.32 0.74
Study 2 0.91 2.48 1.39 0.11 NA
Study 3 1.33 2.06×10^6 * 0.28 1.03 1.07
Study 4 2.23 1.44 2.18 NA NA
Study 5 1.40 1.06 NA NA NA
Study 6 1.24 3.90 NA NA NA
Study 7 1.20 1.00 NA NA NA
NA = Not Available
* Due to zero events in the non-diabetic arm, a tiny number (10^-6) is used instead of zero to make the formula work.
16. Comparison of Estimates for Overall HR

                       Fixed Effect Model   Random Effects Model
                       HR [95% CI]          HR [95% CI]
Method 1 (Proposed)    1.31 [1.03, 1.67]    1.34 [0.99, 1.81]
Method 2 (Parmar)      1.31 [1.03, 1.66]    1.46 [0.90, 2.37]
Method 3 (Williamson)  1.31 [1.04, 1.67]    1.33 [1.02, 1.75]
Cox Model in IPD       1.32 [1.03, 1.67]    1.33 [0.98, 1.82]

IPD = Individual Patient Data
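The slides do not name the random-effects estimator behind the second column; a common choice is DerSimonian-Laird, sketched here as an assumption rather than the deck's documented method:

```python
def dersimonian_laird(log_hrs, variances):
    """Random-effects pooling with the DerSimonian-Laird estimate of
    the between-study variance tau^2 (assumed estimator; the deck
    does not specify which one it used)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    # Cochran's Q heterogeneity statistic and the DL tau^2 estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_hrs) - 1)) / c)
    # Re-weight with within-study variance plus tau^2
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    return pooled, 1.0 / sum(w_star)
```

With homogeneous studies tau^2 is zero and the result coincides with the fixed-effect estimate, which matches the pattern in the table where the two columns are close.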
17. Simulation
• Simulation of KM curves in terms of
numbers of patient at risk, censoring and
events for 5 years
• Meta-analysis of 10 studies
• Two-arm with initial sample size ratio 1:1
• Event rates are centered at 13%, 16%,
20%, 23%, and 24% for treatment arm
based on pooling historical data
• Generated 1,000 times
18. Simulation (Cont’d)
• Varying the first year event rate difference
ranging from 0.2% to 4% (treatment effect)
• Varying heterogeneity in terms of standard
deviation of event rates of 2%, 3%, 4%, 5%
• Calculated the coverage probability defined as
the percentage of 95% CIs that contain the
true underlying value of the log HR over
1,000 simulated runs
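Coverage probability as defined above can be illustrated with a much-simplified simulation: instead of generating full KM curves, each run draws a log HR estimate from a normal distribution around the true value. The normal approximation and all names here are assumptions of this sketch, not the deck's simulation design:

```python
import random

def coverage_probability(true_log_hr, se, n_runs=1000, seed=0):
    """Fraction of 95% CIs (estimate +/- 1.96*se) that contain the
    true log HR across simulated runs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        est = rng.gauss(true_log_hr, se)
        if est - 1.96 * se <= true_log_hr <= est + 1.96 * se:
            hits += 1
    return hits / n_runs
```

Under this idealized setup the coverage sits near the nominal 95%; the deck's point is that heterogeneity across studies pulls it below that.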
21. Result of Simulation
• Small heterogeneity
- For small or large treatment effect, all
three methods perform well
• Increasing heterogeneity
- Bias increases
- Coverage decreases. Some values
decrease dramatically
- For large treatment effect, coverage
decreases less for proposed method
22. Summary: Meta-Analysis of
Kaplan-Meier Curves
• Reported Kaplan-Meier curves with
#patient at risk at baseline
• Read off survival probabilities at each
time point
• Estimate the minimum and the
maximum follow-up time
• Assumption for distribution of censored
subjects: Missing at random (uniform)?
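Under the missing-at-random assumption flagged above, and additionally assuming no censoring within an interval, the event count in an interval can be back-calculated from survival probabilities read off the curve, in the spirit of the Parmar and Tierney references. A hypothetical helper:

```python
def events_in_interval(n_at_risk, s_prev, s_curr):
    """Approximate number of events between two time points from KM
    survival probabilities, assuming no within-interval censoring:
    events = n_at_risk * (S(t_prev) - S(t_curr)) / S(t_prev)."""
    return n_at_risk * (s_prev - s_curr) / s_prev
```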
23. HR Matrix Different Follow-Up
          Time 1   Time 2   Time 3   ...   Time J
Study 1   HR11     HR12     HR13     ...   HR1J
Study 2   HR21     HR22     HR23
  ...
Study K   HRK1     ...      ...      ...   HRKJ
24. Future Work
• Missing value imputation for the HR matrix
• Constant HRs over time
• Test of equality for all non-missing HRij over
time (within each study)
• Inclusion/exclusion criteria
• Distribution of censoring
• Summary KM curve of many KM curves
25. References
• Parmar, M.K.B., Torri, V. and Stewart, L. “Extracting Summary Statistics to Perform Meta-Analyses of the Published Literature for Survival Endpoints”, Stat in Med. 1998; 17:2815-2834
• Tierney, J.F., Stewart, L.A., Ghersi, D., Burdett, S. and Sydes, M.R. “Practical Methods for Incorporating Summary Time-to-Event Data into Meta-Analysis”, Trials 2007; 8:16
• Arends, L.R., Hunink, M.G.M. and Stijnen, T. “Meta-Analysis of Summary Survival Data”, Stat in Med. 2008; 27:4381-4396
• Williamson, P.R., Smith, C.T., Hutton, J.L. and Marson, A.G. “Aggregate Data Meta-Analysis with Time-to-Event Outcomes”, Stat in Med. 2002; 21:3337-3351
• Sutton, A.J., Higgins, J.P. “Recent Developments in Meta-Analysis”, Stat in Med. 2008; 27:625-650