1.
Methodology Research Group
Evaluation of moderation and mediation in the development of personalised therapies (stratified medicine)
MHRN conference, London, 20 March 2013
Sabine Landau, Institute of Psychiatry, King's College London & Graham Dunn, Institute of Population Health, University of Manchester
2.
Outline
1. Introduction to key concepts (Sabine)
• What is personalised therapy / stratified medicine?
• Causal effects, confounding and RCTs
• Treatment effect moderation
• Treatment effect mediation
2. Recap and development of ideas (Graham)
• Correct and incorrect approaches to treatment effect moderation (stratification)
• Using moderator (predictive marker) by treatment interactions as instruments for mediation investigations
3.
Research Programme: Efficacy and Mechanisms Evaluation
Funded by MRC Methodology Research Programme
• Design and methods of explanatory (causal) analysis for randomised trials of complex interventions in mental health (2006-2009) – Graham Dunn (PI), Linda Davies, Jonathan Green, Andrew Pickles, Chris Roberts, Ian White & Frank Windmeijer.
• Estimation of causal effects of complex interventions in longitudinal studies with intermediate variables (2009-2012) – Richard Emsley (MRC Fellow), Graham Dunn.
• Designs and analysis for the evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health (2010-12) – Graham Dunn (PI), Richard Emsley, Linda Davies, Jonathan Green, Andrew Pickles, Chris Roberts, Ian White & Frank Windmeijer with Hanhua Liu.
• Developing methods for understanding mechanism in complex interventions (2013-16) – Sabine Landau (PI), Richard Emsley, Graham Dunn, Ian White, Paul Clarke, Andrew Pickles & Til Wykes.
4.
Aims of Session 1
• To provide an introduction to causal inference using potential outcomes (counterfactuals).
• To show that the concepts of stratified medicine and treatment effect moderation are intrinsically linked to treatment effect heterogeneity.
• To describe some standard approaches to evaluating treatment-effect mechanisms, including the key assumptions, and to highlight some of their potential problems.
• To briefly describe some newer approaches to mechanism evaluation so that you are familiar with these concepts and their potential.
5.
Example 1: efficacy and mechanisms evaluation and personalised medicine
• Parenting training may be effective at improving conduct of children with behavioural problems, but its effect might be greater in some children than in others.
• Similarly, the training is likely to improve aspects of parenting and, again, its effect on such parent outcomes is likely to vary from one parent to another.
• We might expect that if one parent's parenting has been improved considerably more than that of another parent, then the conduct of the first parent's child has been improved more than that of the second parent's child.
– Who are parenting training programmes effective for?
– What proportion of the training programme effect on child conduct is explained by its effect on parenting practice?
6.
Example 2: efficacy and mechanisms evaluation and personalised medicine
• A recent large-scale randomised controlled trial (RCT) provided evidence for the effectiveness of augmenting antidepressant medication with cognitive behavioural therapy (CBT) as a next step for patients whose depression has not responded to pharmacotherapy (Wiles et al, 2012).
• Thus the treatment (CBT) was shown to work for a subpopulation who were identified as "non-responders to antidepressants".
• CBT is supposed to work by changing the way people think about themselves, the world and other people.
– Who does CBT work for?
– What proportion of the CBT effect on depressive symptoms is explained by its effect on cognition?
7.
General principle of causal inference
• Effect size estimates (correlations, regression coefficients, odds ratios etc.) can only tell us about the association between two variables (say X and Y).
• The aim of causal inference is to infer whether this association can be given a causal interpretation (e.g. X causes Y) by:
– defining the causal parameters,
– being explicit about the assumptions made when using a particular estimator,
– thinking about other possible explanations for observed effects, especially confounding.
8.
Ideas of causality (Cox and Wermuth, 2001)
• Causality as a stable association
– An observed association that cannot be accounted for by any postulated confounder(s)
» (but, on its own, this says nothing about the direction of the causal effect)
• Bradford Hill's criteria
– A series of conditions which make the hypothesis of causality more convincing
» (but none are either necessary or sufficient to prove causality)
• Causality as an effect of an intervention
– Potential outcomes/counterfactuals (Neyman, Rubin, etc.)
– The idea of fixing (setting) the values of the explanatory variables (Pearl)
• Causality as an explanation of a process
– This is where science comes in…
9.
How can we formally define a causal treatment effect?
• The potential outcomes/counterfactual approach.
• It is a comparison between what is and what might have been.
• We wish to estimate the difference between a patient's observed outcome and the outcome that would have been observed if, contrary to fact, the patient's treatment or care had been different (Neyman, 1923; Rubin, 1974).
• Without the possibility of comparison the treatment effect is not well defined, e.g. gender as a cause.
10.
Individual treatment effects (ITEs)
• For a given individual, the effect of treatment is the difference:
ITE = Outcome(treatment) − Outcome(control)
We can never observe this!
11.
Causal inference using counterfactuals
[Diagram: the same individual receives treatment and, counterfactually, control; an outcome is measured under each. Comparison of the outcomes gives an individual treatment effect.]
12.
Causal inference using counterfactuals
[Diagram: one individual receives treatment, a different individual receives control; an outcome is measured for each. Comparison of these outcomes will not give an individual treatment effect.]
13.
Average treatment effect (ATE)
• The average treatment effect is:
ATE = Average[ITE] = Average[Outcome(treatment) − Outcome(control)]
• If the selection of treatment options is purely random (as in a perfect RCT) then:
Ave[Outcome(treatment) − Outcome(control)]
= Ave[Outcome(treatment) | treatment] − Ave[Outcome(control) | control]
= Ave[Outcome | treatment] − Ave[Outcome | control]
• The ATE defines the efficacy of the treatment with respect to the control.
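The averaging argument above can be sketched in a short simulation (an illustrative sketch with invented numbers, not taken from the slides): individual treatment effects vary, only one potential outcome per person is ever observed, yet the difference in observed group means recovers the ATE under random allocation.

```python
# Sketch: randomisation recovers the ATE even though no ITE is ever observed.
import random
import statistics

random.seed(1)
n = 100_000

# Potential outcomes for every individual: outcome under control and under
# treatment. The ITE varies across individuals (treatment-effect heterogeneity).
y_control = [random.gauss(30, 5) for _ in range(n)]
ite = [random.gauss(-4, 3) for _ in range(n)]          # true ATE = -4
y_treated = [c + d for c, d in zip(y_control, ite)]

# Random allocation: each person reveals only ONE potential outcome.
assign = [random.random() < 0.5 for _ in range(n)]
obs_t = [yt for yt, a in zip(y_treated, assign) if a]
obs_c = [yc for yc, a in zip(y_control, assign) if not a]

# Difference in observed group means estimates Ave[Y(T)] - Ave[Y(C)] = ATE.
ate_hat = statistics.mean(obs_t) - statistics.mean(obs_c)   # close to -4
```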
14.
Causal inference using counterfactuals
[Diagram: one group receives treatment, another group receives control; outcomes are measured in each group. Comparison of average outcomes gives an average treatment effect.]
15.
Problem of confounding
[Diagram: U → Exposure, U → Outcome, Exposure → Outcome]
• Observed variables are shown in squares, unobserved (latent) variables in circles.
• An arrow (directed link) between variables represents a causal effect.
• We are interested in the causal effect of Exposure on Outcome (black path).
• U is an unmeasured confounder (= a cause of both Exposure and Outcome).
• The confounder provides a backdoor path connecting Exposure and Outcome (red path).
16.
Why randomisation?
• The strength of randomisation is that it ensures that there are no variables (observed or unobserved) that drive treatment allocation.
• In terms of a causal graph, there are no arrows into randomisation from any other variable, observed or unobserved:
– Random treatment group is not a descendant of any other variable.
– It is exogenous in the model with response = Outcome and covariate = Random treatment group.
• This means that any comparison between randomisation groups (e.g. mean difference) estimates a (total) causal effect…
– …provided the trial has been well designed and executed.
17.
Mendelian randomisation (from Davey-Smith 2011)
• "The principle of Mendelian randomization relies on the basic (but approximate) laws of Mendelian genetics. If the probability that a postmeiotic germ cell, that has received any particular allele at segregation, contributes to a viable conception is independent of environment (following from Mendel's first law), and if genetic variants sort independently (following from Mendel's second law), then at a population level these variants will not be associated with the confounding factors that generally distort conventional observational studies."
• Basically, genotypes are entirely derived from parents but can be considered randomly allocated,
– e.g. if both parents are type AB, then the genotype could be AA (probability 0.25), AB (0.50) or BB (0.25).
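The parental-genotype example above can be checked by a tiny enumeration (an illustrative sketch, not part of the slides): each parent contributes one of their two alleles with equal probability.

```python
# Offspring genotype when both parents are heterozygous (AB), enumerated
# over the four equally likely allele combinations.
from collections import Counter
from itertools import product

parent1, parent2 = "AB", "AB"
offspring = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
probs = {g: c / 4 for g, c in offspring.items()}
# probs == {"AA": 0.25, "AB": 0.5, "BB": 0.25}, as on the slide
```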
18.
Mendelian randomisation (from Davey-Smith 2011)
• Genotypes are equivalent to randomisation…
• As before, in causal graph terms, there are no arrows into genes from any other variable, observed or unobserved:
– Gene is not a descendant of any other variable.
– It is exogenous in the model with response = Outcome and covariate = Gene.
• This means that any comparison between genes (e.g. mean difference) estimates a (total) causal effect.
19.
Treatment effect heterogeneity
• Importantly, the definition of the causal parameter, the average causal effect (ATE), does not require that the ITEs are equal for everyone.
[Diagram: individual effects ranging from a positive effect to a detrimental effect, comparing 'receive treatment' with 'receive control'.]
20.
Personalised medicine and treatment effect heterogeneity
• The existence of variation in individual treatment effects (ITEs) is the foundation of personalised medicine.
– Stratified medicine
– Predictive medicine
– Genomic medicine
• If we are to pursue the idea of stratified medicine then we must believe in treatment effect heterogeneity.
• We should therefore use statistical methodology that explicitly accounts for such causal effect heterogeneity.
21.
Baseline predictors
• How does stratified medicine exploit treatment effect heterogeneity?
• We are interested in knowing, in advance of treatment allocation/decisions to treat, who a treatment is most effective for.
• For personalised medicine we need access to pre-treatment (baseline) characteristics that predict treatment-effect heterogeneity.
– We don't just want to predict outcome.
22.
Moderators of treatment
Baseline (pre-treatment) characteristics that influence the effect of treatment on outcome.
[Diagram: Random allocation → Outcomes, with the Marker moderating this effect.]
Note this path diagram is no longer a causal graph. We call such baseline variables a "marker" – for more see Session 2.
23.
Moderation assessment in trials
• The ability of a baseline variable to act as a treatment moderator (also referred to as a treatment effect modifier) can be investigated by assessing the interaction between treatment and the moderator variable in terms of the outcome.
• When the treatment has been randomised, the causal effect of the treatment (its efficacy) within subpopulations defined by the levels of the moderator can be estimated.
• (In particular, randomisation within strata defined by the levels of the moderator maximises the power of this assessment.)
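A minimal simulated sketch of this idea (all numbers invented for illustration): with randomised treatment, the causal effect within each level of a binary baseline marker G is estimable from group means, and the treatment-by-G interaction is simply the difference between the two stratum-specific effects.

```python
# Stratum-specific treatment effects and their difference (the interaction).
import random
import statistics

random.seed(2)
n = 100_000
rows = []
for _ in range(n):
    g = random.random() < 0.5                 # binary baseline moderator
    t = random.random() < 0.5                 # random allocation
    effect = -8 if g else -2                  # treatment effect moderated by G
    y = random.gauss(30, 5) + (effect if t else 0)
    rows.append((g, t, y))

def ate_in(stratum):
    """Difference in mean outcome, treated minus control, within a G stratum."""
    treated = [y for g, t, y in rows if g == stratum and t]
    control = [y for g, t, y in rows if g == stratum and not t]
    return statistics.mean(treated) - statistics.mean(control)

ate_g0, ate_g1 = ate_in(False), ate_in(True)   # close to -2 and -8
interaction = ate_g1 - ate_g0                  # close to -6: G moderates the effect
```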
24.
Moderation assessment in treated cohorts
• Often investigators look for outcome heterogeneity in a cohort of people who received the treatment and interpret such heterogeneity as evidence for moderation.
– E.g. for patients with schizophrenia receiving a psychological therapy, compare functioning between SCZ subtypes.
• This approach does not address the moderation question!
• The approach assesses whether a baseline variable is predictive of outcome but NOT whether it is predictive of treatment effects.
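A small simulation makes the pitfall concrete (invented numbers): a purely prognostic baseline variable produces large outcome differences in a treated-only cohort even though the treatment effect is identical for everyone, i.e. there is no moderation at all.

```python
# Outcome heterogeneity in a treated cohort with ZERO treatment-effect
# heterogeneity: the baseline variable is prognostic, not predictive.
import random
import statistics

random.seed(3)
n = 50_000
effect = -5                                   # same treatment effect for all
cohort = []
for _ in range(n):
    g = random.random() < 0.5                 # baseline subtype
    y_control = random.gauss(40 if g else 30, 5)   # G is prognostic
    cohort.append((g, y_control + effect))    # everyone is treated

mean_g1 = statistics.mean(y for g, y in cohort if g)
mean_g0 = statistics.mean(y for g, y in cohort if not g)
outcome_gap = mean_g1 - mean_g0               # close to 10, despite no moderation
```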
25.
Prognostic baseline variables
• Cohort studies of treated patients can only assess the ability of baseline variables to predict the outcome;
– that is, whether they are prognostic variables.
• They cannot say anything about the ability of baseline variables to predict treatment effects;
– that is, whether they are predictive (moderator) variables.
• In personalised medicine it is moderators we want to investigate.
• However, we may make use of prognostic variables to do this in a more powerful way (see Session 2).
26.
Treatment effect mediation
• The aim of efficacy and mechanism investigations is to go beyond evaluating whether an intervention is effective and to explain why it might be efficacious:
– What are the putative mechanisms through which the treatment acts?
• Usual analysis methods are dominated by decomposing total effects into direct and indirect effects:
– Mental health and psychology have been concerned with this idea for decades.
– The widely cited Baron and Kenny paper covers mediation analysis in the social sciences.
– It makes implicit assumptions which are unlikely to hold.
27.
Simple mediation diagram
[Diagram: Exposure → Mediator → Outcome (indirect path); Exposure → Outcome (direct path).]
Total effect = direct effect + indirect effect
28.
Confounded mediation assessment in epidemiology
[Diagram: as before, with unmeasured confounders U of each pair among Exposure, Mediator and Outcome.]
If treatment is not randomised then there is likely to be even more unmeasured confounding.
29.
How does randomisation help?
[Diagram: Random allocation → Mediator → Outcomes; the backdoor path into allocation is "blocked" by randomisation, but unmeasured confounders U of the mediator–outcome relationship remain.]
30.
Mediation in trials
[Diagram: Random allocation → Mediator → Outcomes, with covariates, error terms, and U = the unmeasured confounders of the mediator–outcome relationship.]
31.
Mediation in genetic epidemiology
[Diagram: Gene → Mediator → Outcomes, with covariates, error terms, and U = the unmeasured confounders of the mediator–outcome relationship.]
32.
Possible solutions
• There are basically two ways by which we can ensure that we can estimate the causal parameters of interest in mechanisms investigations (direct and indirect treatment effects):
– Measure and adjust for potential confounders (sounds obvious, not always done) …
» so that there remains no hidden confounding and traditional Baron and Kenny mediation analysis approaches can be applied.
– Use estimators that can consistently estimate mediation parameters in the presence of hidden confounding …
» a class of estimators called instrumental variables estimators allows for this;
» however, these also require assumptions (see below).
33.
Measuring confounders
• This can be difficult when knowledge about the underlying processes is only patchy.
• However, when the putative confounder(s) are known it might be possible to obtain measures and thus enable causal mediation assessments even for only partly observed mediators.
• Example – immunology (Follmann, 2006):
» Trial comparing vaccination with an HIV vaccine against controls.
» Putative mediator = immune response (only observed in the vaccinated group).
» Interest lies in whether the vaccination effect on the infection rate is mediated by the immune response.
34.
Vaccine trials
• It is easy to demonstrate that immune response is a correlate of protection in the vaccinated arm: the higher the response, the lower the infection rate.
• Unfortunately, this correlation does not necessarily imply a causal effect.
– Protection against infection specifically induced by the HIV vaccine is confounded with underlying levels of protection in the absence of vaccination.
– Someone capable of producing a large immune response would be more resistant to infection, even in the absence of vaccination.
35.
"Strange result"
• Confounding explained the strange result:
– Immune response observed after HIV vaccination.
» …though really what is being observed here is the combination of protection due to general and specific (HIV vaccine) factors.
– Antibody response to the HIV vaccination was strongly associated with infection risk in the vaccine group.
» …though that could just be protection due to general factors correlating with infection risk.
– But NO effect of HIV vaccination on infection rate (large trial of approx. 5000 participants).
• A correlate of protection is not necessarily a treatment-effect mediator, let alone a valid surrogate outcome.
36.
A hypothetical HIV vaccine trial (Follmann, 2006)
• Vaccinate everyone before randomisation with an irrelevant vaccine (against rabies, for example).
• Measure the immune response to the rabies vaccine (a proxy of protection due to general factors).
• Randomly allocate participants to receive HIV vaccine or placebo.
• Measure the immune response in the HIV vaccinated group.
• Use the response to the rabies vaccine to (multiply) impute the missing HIV vaccine response in the placebo participants.
• Carry out a Baron and Kenny analysis on the imputed data which controls for the now observed confounder.
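The steps above can be sketched in a simplified simulation (single regression-based imputation rather than Follmann's multiple imputation; all variable names and numerical values here are invented for illustration): the response to the irrelevant vaccine proxies general immune capability and is used to fill in the unobserved HIV-vaccine response in the placebo arm.

```python
# Sketch of the Follmann-style design: impute the unobserved HIV-vaccine
# immune response in the placebo arm from a pre-randomisation proxy.
import random

random.seed(7)
n = 10_000
people = []
for _ in range(n):
    general = random.gauss(0, 1)                  # general immune capability
    rabies = general + random.gauss(0, 0.5)       # measured in everyone
    hiv = 1.0 + general + random.gauss(0, 0.5)    # only seen if vaccinated
    vaccinated = random.random() < 0.5
    people.append((vaccinated, rabies, hiv))

# Fit hiv ~ rabies in the vaccinated arm (simple least squares):
vac = [(r, h) for v, r, h in people if v]
mr = sum(r for r, _ in vac) / len(vac)
mh = sum(h for _, h in vac) / len(vac)
beta = (sum((r - mr) * (h - mh) for r, h in vac)
        / sum((r - mr) ** 2 for r, _ in vac))
alpha = mh - beta * mr

# Impute the missing HIV response for the placebo participants:
imputed = [alpha + beta * r for v, r, h in people if not v]
```

The imputed values can then enter a mediation analysis as if the confounded part of the immune response had been observed in both arms.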
37.
Why do we need instrumental variables?
• All the statistical methods we usually use (for any standard analysis), including:
– Stratification
– Regression
– Matching
– etc.
require the one unverifiable condition we identified previously: NO UNMEASURED CONFOUNDING.
• Instrumental variables allow us to relax this assumption.
38.
Instrumental variables
• For mediation assessment in a trial we are looking for a variable that:
1. is (strongly) predictive of the intermediate variable;
2. has no direct effect on the outcome, except through the intermediate variable;
3. does not share common causes with the outcome.
• If these conditions hold, in addition to one further assumption (no interactions or monotonicity), then such a variable can be used as an instrumental variable (IV).
• Randomisation, where available, satisfies criteria 1 and 3.
• If we consider this when designing the trial, we can measure variables that MIGHT meet these requirements.
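For a single binary instrument these conditions lead to a classical Wald-type ratio estimator. The sketch below (all data-generating values invented) shows it staying consistent for the mediator's effect on outcome where ordinary regression is biased by the unmeasured confounder U.

```python
# Single-instrument IV (Wald-type) estimator vs. ordinary regression
# when an unmeasured confounder U affects both mediator and outcome.
import random

random.seed(4)
n = 200_000
z, m, y = [], [], []
for _ in range(n):
    zi = random.random() < 0.5                # instrument (e.g. randomisation)
    u = random.gauss(0, 1)                    # unmeasured mediator-outcome confounder
    mi = 2.0 * zi + u + random.gauss(0, 1)    # instrument predicts the mediator
    yi = 3.0 * mi + 2.0 * u + random.gauss(0, 1)  # true mediator effect = 3
    z.append(float(zi)); m.append(mi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (v - mb) for x, v in zip(a, b)) / len(a)

beta_ols = cov(m, y) / cov(m, m)              # biased upwards by U
beta_iv = cov(z, y) / cov(z, m)               # close to 3 despite U
```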
39.
Mediation diagram with instrumental variables
[Diagram: Instruments → Mediator → Outcomes, alongside Random allocation, covariates, error terms and U = unmeasured confounders.]
40.
Possible instruments
• The following variables might serve as instrumental variables to enable mediation investigations in trials:
– Baseline variable x randomisation interactions (see Session 2)
» E.g. the mother's mental health x training programme interaction in the parenting example.
– Trial x randomisation interaction in a meta-analysis of trials.
– Randomly allocated non-standardised aspects of interventions
» E.g. low and high intensity versions of therapy.
– Genes
» An application of Mendelian randomisation where it is assumed that a gene determining the intermediate phenotype only affects the distal phenotype by changing the intermediate.
41.
Mendelian randomisation: using genotype as an IV
[Diagram: as before, with GENES as the instrument predicting the Mediator; Random allocation, covariates, error terms and U = unmeasured confounders.]
42.
Assumptions for instrumental variables
• IV methods require FOUR assumptions.
• The first three assumptions are from the definition:
– The association between instrument and mediator.
– No direct effect of the instrument on outcome.
– No unmeasured confounding for the instrument and outcome.
• There is a wide variety of fourth assumptions, and different assumptions result in the estimation of different causal effects:
– E.g. no interactions, monotonicity (no defiers).
43.
Instrumental variables: pros and cons
Advantages:
1. Can allow for unmeasured confounding.
2. Can allow for measurement error.
3. Randomisation often meets the definition, so is an ideal instrument.
Disadvantages:
1. It is impossible to verify that a variable is an instrument, and using a non-instrument introduces additional bias.
2. A weak instrument increases the bias over that of ordinary regression (for finite samples).
3. Instruments by themselves are actually insufficient to estimate causal effects; we require additional assumptions.
See Hernán and Robins (2006), Epidemiology, for further details.
44.
Assumption trade-off
• IV methods replace one unverifiable assumption – no unmeasured confounding between the intermediate variable and the outcome – by other unverifiable assumptions:
– no unmeasured confounding for the instruments, and
– no direct effect of the instruments.
• We need to decide which assumptions are more likely to hold in our analysis.
• An IV analysis will also decrease the precision of our estimates because of allowing for the unmeasured confounding.
45.
In the next session…
• Combining all these ideas:
– Using baseline moderator variables (predictive markers) for the evaluation of treatment effect mechanisms.
– Using prognostic baseline variables (markers) as confounders or instrumental variables.
– Improved trial designs to evaluate treatment-effect heterogeneity and corresponding mediational mechanisms.
• First we will have a short break…
46.
Evaluation of moderation and mediation in the development of personalised therapies (stratified medicine)
SESSION 2
47.
Aims of Session 2
• Recap the main ideas from Session 1.
• Develop these ideas to
– identify correct and incorrect approaches to assessing treatment effect moderation (stratification).
• Develop these ideas to
– suggest trial designs and analyses that use moderator (predictive marker) by treatment interactions as instruments for mediation investigations.
48.
Recap: treatment effects and treatment-effect moderation
• Potential outcomes & treatment effects
• Average treatment effects
• Treatment-effect heterogeneity (moderation)
• Naïve searches for stratifying factors (moderators)
49.
Treatment effects
• Treatment effects do not make sense (are not defined) without comparison.
• We are comparing the outcome we see after therapy with the outcome we might have seen had the individual not received therapy, or received therapy of a different kind to that actually experienced.
• We are comparing potential outcomes or counterfactuals.
50.
Potential outcomes
• Consider just two alternatives for the treatment of depression: therapy (T) or a control condition (C).
• We have an outcome (the Beck Depression Inventory score) that could be measured six months after the decision to start therapy (or not).
• Let these two potential outcomes be BDI(T) and BDI(C) for the therapy and control conditions, respectively.
51.
Comparison of potential outcomes
• The treatment effect for any given individual is the difference BDI(T) − BDI(C), which we would expect to be a negative number if the treatment is beneficial.
• Unfortunately, we never get to see both potential outcomes, so we can never observe this individual's treatment effect.
52.
So-called treatment response is not a measure of an effect of therapy
• Let's now introduce a measure of depression, BDI(0), that is obtained at the time of the start of therapy.
• The change over time under therapy – i.e. BDI(T) − BDI(0) – is not the same as BDI(T) − BDI(C).
• BDI(0) is NOT BDI(C)!
53.
Randomisation and Average Treatment Effects
• We get round our problem by working with averages:
ATE = Ave[BDI(T) − BDI(C)] = Ave[BDI(T)] − Ave[BDI(C)]
• If we have random allocation to treatment, R = T or C, then
ATE = Ave[BDI | R=T] − Ave[BDI | R=C]
54.
Treatment-effect heterogeneity
• The treatment effect BDI(T) − BDI(C) is highly likely to vary from one individual to another.
• We would like to know what background information moderates (or predicts) the individual's treatment effect. This is the essence of stratification.
• Let's say we have a genotypic marker (G = 0, 1). We'd like to look at the association between G and BDI(T) − BDI(C).
55.
Again, we look at averages
• We are concerned with the evaluation of the comparison of ATE|G=0 with ATE|G=1.
• This can be done by estimating and/or testing a treatment by genotype interaction in a suitably-powered RCT.
– (e.g. see the GENPOD trial: Lewis et al. BJPsych, Vol 198, pp 464-471, 2011).
56.
This is not rocket science… but what do geneticists usually do?
• Investigators have a cohort of treated individuals.
• They have a measure of treatment outcome, say BDI(T), or treatment response, BDI(T) − BDI(0), on all individuals within the cohort. Often, they label people as 'responders' or 'non-responders'.
• They investigate associations between treatment outcome and genotypic markers (G).
57.
A treatment outcome is not a treatment effect
• BDI(T) is not BDI(T) − BDI(C)!
• Let the treatment effect be Δ.
• Then the treatment outcome, BDI(T), is equal to BDI(C) + Δ (to note the obvious!).
58.
Confounding of treatment effects with prognosis
• The genotype (G) may be associated with both the treatment effect (Δ) and with the treatment-free outcome, BDI(C), i.e. prognosis.
• Associating G with the treatment outcome, BDI(T), cannot distinguish between the two.
• Most importantly, treatment outcome may be associated with G even when there is no effect of treatment for anyone in the treated cohort!
59.
… and evaluating the so-called treatment response doesn't help!
• Δ = BDI(T) − BDI(C)
• Treatment response = BDI(T) − BDI(0) = Δ + BDI(C) − BDI(0)
• Still confounded!
– At best, these investigations are identifying candidates for further (more rigorous) investigation.
– At worst, they are uncovering artefacts.
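A short simulation illustrates the artefact (all numbers invented): treatment does nothing for anyone (Δ = 0), yet "treatment response" BDI(T) − BDI(0) differs sharply by genotype, simply because G predicts the treatment-free course BDI(C) − BDI(0).

```python
# Treated-only cohort where the treatment effect is exactly zero for
# everyone, yet genotype is strongly associated with "response".
import random
import statistics

random.seed(5)
n = 50_000
data = []
for _ in range(n):
    g = random.random() < 0.5
    bdi0 = random.gauss(30, 4)                      # baseline severity
    # Natural improvement is larger when G = 1 (a purely prognostic effect):
    bdi_c = bdi0 - random.gauss(10 if g else 4, 3)
    bdi_t = bdi_c + 0                               # Δ = 0: no effect at all
    data.append((g, bdi_t - bdi0))                  # "treatment response"

resp_g1 = statistics.mean(r for g, r in data if g)
resp_g0 = statistics.mean(r for g, r in data if not g)
response_gap = resp_g1 - resp_g0   # close to -6 although treatment does nothing
```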
60.
Our approach to stratified medicine (personalised therapy)
• Predicting outcome after treatment (responders vs. non-responders) barely scratches the surface of stratified medicine.
• Understanding the mechanism underlying the stratification is the key scientific question, and the methodological challenge.
61.
Our "manifesto"
• Personalised (stratified) medicine and treatment-effect mechanisms evaluation are inextricably linked, and stratification without a corresponding mechanisms evaluation lacks credibility.
• In the almost certain presence of mediator-outcome confounding, mechanisms evaluation is dependent on stratification for its validity.
• Both stratification and treatment-effect mediation can be evaluated using a marker-stratified trial design together with detailed baseline measurement of all known prognostic markers and other prognostic covariates.
62.
Our methodological solution
• Direct and indirect (mediated) effects should be estimated through the use of instrumental variable methods (the instrumental variable being the predictive marker by treatment interaction) together with adjustments for all known prognostic markers (confounders) – the latter adjustments contributing to increased precision (as in a conventional analysis of treatment effects) rather than bias reduction.
63.
A purely prognostic marker
[Diagram: Randomised Treatment → Outcome; Prognostic Marker → Outcome.]
64.
Prognostic marker
[Plot: Outcome against marker level, with separate lines for Treated and Untreated; the vertical gap between them (the treatment effect) is the same at every marker level.]
65.
A prognostic marker as a confounder
[Diagram: Randomised Treatment → Putative Mediator → Clinical Outcome, with the Prognostic Marker and U both affecting Mediator and Outcome.]
66.
Instrumental variables
• If the causal influence of the prognostic marker on the final outcome can be fully explained by its influence on the intermediate, then the marker can be used as an instrumental variable (or instrument, for short).
• This is the theoretical rationale in the use of so-called 'Mendelian randomisation'.
67.
An instrumental variable (IV)
[Diagram: Random Allocation (IV) → Treatment Received → Outcome, with U confounding Treatment Received and Outcome.]
68.
A prognostic marker as an instrumental variable
[Diagram: Randomised Treatment → Putative Mediator → Clinical Outcome; the Prognostic Marker affects the Mediator only – no direct link to the Outcome; U = unmeasured confounders.]
69.
Predictive markers
• Although they may have direct predictive effects on both intermediate and final outcomes, their essential characteristic is that they moderate (influence) treatment effects.
• If the treatment-effect moderation on the final outcome is wholly explained by the moderation of the effect of treatment on the intermediate outcome, then the latter (i.e. a treatment by marker interaction) can be used as an instrument.
• A more subtle (and more realistic?) version of Mendelian randomisation.
70.
Predictive marker (may also be prognostic)
[Diagram: Randomised Treatment → Outcome, with the Predictive Marker (moderator) moderating this effect.]
71.
Predictive marker
[Plot: Outcome against marker level, with separate lines for Treated and Untreated; the treatment effect depends on the marker level.]
72.
Putting it all together: potential joint roles of predictive and prognostic markers
[Diagram: Randomised Treatment, Intermediate Outcome (Mediator) and Final (Clinical) Outcome linked by paths A, B and C; Predictive Marker (moderator), Prognostic Marker (risk factor) and U = unmeasured confounders.]
73.
Potential roles of prognostic markers: measured confounder or instrumental variable
[Diagram: as before, with the Prognostic Marker (risk factor); U = unmeasured confounders. Dotted lines mark pathways we might assume are absent; alternatively, we might assume that there are no longer any Us.]
74.
Option 1 – use prognostic marker(s) as measured confounder(s) and then assume there is no hidden confounding (U)
[Diagram: Randomised Treatment, Intermediate Outcome (Mediator) and Final (Clinical) Outcome linked by paths A, B and C, with Prognostic Markers 1 and 2 as measured confounders.]
75.
Option 2 – use a prognostic marker as an instrumental variable (Mendelian randomisation)
[Diagram: as before, with the Prognostic Marker used as an instrument for the Mediator; U = unmeasured confounders.]
76.
Potential problems with Mendelian randomisation
• The assumption that there is no direct effect of the genetic marker on the final outcome is frequently difficult to justify, and practically impossible to verify.
– Dependent on prior knowledge.
• The marker is likely to be a rather weak instrument (i.e. its influence on the intermediate outcome is not strong enough).
– This can lead to problems (see Session 1).
• Probably wiser to use available prognostic markers as observed confounders.
77.
Potential role of predictive markers
[Diagram: as before, with the Predictive Marker (moderator); U = unmeasured confounders. Red dotted lines mark pathways we might be justified in assuming are absent.]
78.
Stratification & mediational mechanisms evaluation
[Diagram: as before, using the treatment by marker interaction as an instrumental variable; U = unmeasured confounders.]
79.
Is the treatment by predictive marker interaction a valid instrument?
• Are we correct in assuming that there is no moderating effect on pathway B?
• Are we correct in assuming that there is no moderating effect on pathway C?
• Dependent on prior knowledge of the biology/biochemistry of the system.
80.
Theory-driven stratification
• Prior scientific theory and preliminary evidence strongly suggest that a given predictive marker has its influence through a specific mechanism (the putative mediator).
• There is no reason to expect that the moderating effect of the predictive marker works via a pathway not associated with the above mechanism (i.e. we assume that the treatment by marker interaction – the moderation – is a valid instrument).
81.
Using strong theory and all available prognostic marker information
[Path diagram as before, now with the prognostic marker(s) included as measured confounder(s); U = unmeasured confounders.]
Using the treatment by marker interaction as an instrumental variable
82.
Complicated but Viable!!
• Statistical methods are widely available to estimate the pathways of this model (we won't worry about the technical details).
• Health warning! This model is pretty complex and depends on a lot of assumptions. Are these assumptions – i.e. the theory – defensible? Invalid assumptions lead to invalid solutions.
83.
Real examples – we don't have any!
• We know of no existing examples of the use of this design – we are presently writing it up for publication.
• Examples from our mental health trials involve retrospective analyses of archived data.
• Four funded EME trials are under way:
  – Ketamine ECT in depression (Ian Anderson et al.);
  – Minocycline and negative symptoms (Bill Deakin et al.);
  – Worry Intervention Trial (Freeman et al.);
  – DBT for depression (Lynch et al.);
  but none fully utilise biomarker information as described here.
84.
A computer-simulated example
• Trial with 1000 participants (500 treated, 500 controls).
  – Quantitative outcome, y.
• Binary predictive marker (x10):
  – Treatment effect on the mediator (m) is 10 units in its absence; 60 units in its presence.
  – Moderating effect of x10 on the outcome acts solely through the mediator (x10 known to be an IV).
  – Variants of x10 equally probable (50:50).
• Nine uncorrelated binary prognostic markers, x1–x9.
  – All nine are confounders.
  – Details of their creation are of no consequence here.
85.
The true model (mediator)
Mediator (m):
  m = 5*x1 + 5*x2 + 5*x3 + 5*x4 + 5*x5 + 5*x6 + 5*x7 + 5*x8 + 5*x9 + 5*x10 + 10*treat + 50*x11 + e12
where
  x11 = treat*x10 (i.e. the treatment by marker interaction),
  e12 is a random 'error' term,
  "*" is a multiplication sign.
86.
The true model (outcome)
Outcome (y):
  y = 5*x1 + 5*x2 + 5*x3 + 5*x4 + 5*x5 + 5*x6 + 5*x7 + 5*x8 + 5*x9 + 5*x10 + 2*m + 10*treat + e13
where e13 is a random 'error' term (uncorrelated with e12).
There is no x11 (interaction) term in this model.
THERE ARE NO UNMEASURED COMMON CAUSES (i.e. x1–x9, and x10, are all measured).
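The two structural equations above can be reproduced in a short simulation. Here is a minimal sketch in Python/numpy rather than the slides' own software; the intercepts and error standard deviations are assumptions for illustration, so the summary statistics will not match the slide output exactly, but the structural coefficients follow the slides.

```python
import numpy as np

rng = np.random.default_rng(2013)
n = 1000
treat = np.repeat([0, 1], n // 2)          # 500 controls, 500 treated
X = rng.integers(0, 2, size=(n, 9))        # x1-x9: binary prognostic markers (confounders)
x10 = rng.integers(0, 2, size=n)           # binary predictive marker, variants 50:50
x11 = treat * x10                          # the treatment by marker interaction
e12 = rng.normal(0, 5, size=n)             # mediator 'error' (SD is an assumption)
e13 = rng.normal(0, 5, size=n)             # outcome 'error', uncorrelated with e12

# Mediator: treatment effect 10 when x10 = 0, raised to 10 + 50 = 60 when x10 = 1
m = 5 * X.sum(axis=1) + 5 * x10 + 10 * treat + 50 * x11 + e12
# Outcome: effect of m is 2, direct treatment effect is 10, no interaction term
y = 5 * X.sum(axis=1) + 5 * x10 + 2 * m + 10 * treat + e13

# The treated arm is much more variable, as on the 'Simple summaries' slide
print(round(m[treat == 1].std() / m[treat == 0].std(), 1))
print(round(m[treat == 1].mean() - m[treat == 0].mean(), 1))  # roughly 10 + 50/2 = 35
```

With x10 split 50:50, the average treatment effect on the mediator mixes the 10-unit and 60-unit effects, so the arm difference in mean m is about 35.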
87.
Simple summaries
-> treat = 0
 Variable |  Obs     Mean   Std. Dev.      Min      Max
----------+--------------------------------------------
        m |  500    74.83       7.58     55.22    97.78
        y |  500   174.47      22.27    116.21   247.78
-> treat = 1
 Variable |  Obs     Mean   Std. Dev.      Min      Max
----------+--------------------------------------------
        m |  500   108.92      28.42     55.66   159.70
        y |  500   252.91      61.10    124.59   372.49
Note the lack of homogeneity of standard deviations across the groups:
THE TREATMENT GROUP IS MUCH MORE VARIABLE (AS WE MIGHT EXPECT).
88.
Naïve analysis methods
• I won't bother to describe these in detail (but see below).
• In the psychological and social science literature they are dominated by approaches similar to those advocated by Baron & Kenny (about 17,000 citations!).
• At the more hi-tech end of medicine they've rarely got round to using even the naïve methods!
89.
Let's pretend we've not measured x1–x9:
i.e. there are indeed 'unmeasured' common causes.
An instrumental variable regression in Stata:
  ivregress 2sls y treat x10 (m = x11), first
This is a two-stage least-squares procedure which simultaneously estimates the effect of treatment on m (the first-stage regression), the effect of m on y, and the direct effect of treatment on y (the second stage).
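The same two-stage logic can be sketched by hand. Below is a hedged Python/numpy translation of `ivregress 2sls y treat x10 (m = x11), first`, run on a fresh simulation of the model above (intercepts and error SDs are again assumptions, so the estimates differ slightly from the slide output): an OLS first stage for m, then an OLS second stage with m replaced by its fitted values.

```python
import numpy as np

# Simulate the trial as described on the 'true model' slides
rng = np.random.default_rng(1)
n = 1000
treat = np.repeat([0, 1], n // 2)
X = rng.integers(0, 2, size=(n, 9))            # x1-x9, deliberately treated as unmeasured
x10 = rng.integers(0, 2, size=n)
x11 = treat * x10
m = 5 * X.sum(axis=1) + 5 * x10 + 10 * treat + 50 * x11 + rng.normal(0, 5, n)
y = 5 * X.sum(axis=1) + 5 * x10 + 2 * m + 10 * treat + rng.normal(0, 5, n)

def ols(Z, v):
    """Least-squares coefficients of v on the columns of Z."""
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return beta

ones = np.ones(n)
# First stage: regress the endogenous mediator on the exogenous
# variables (treat, x10) plus the instrument x11
Z1 = np.column_stack([ones, treat, x10, x11])
g = ols(Z1, m)
m_hat = Z1 @ g                                 # fitted mediator values
# Second stage: replace m by its fitted values
Z2 = np.column_stack([ones, m_hat, treat, x10])
b = ols(Z2, y)
print("first stage:  treat %.2f  x11 %.2f" % (g[1], g[3]))   # near the true 10 and 50
print("second stage: m %.2f  treat %.2f" % (b[1], b[2]))     # near the true 2 and 10
```

One caveat on this manual version: the plug-in second-stage regression gives consistent coefficients, but its naive standard errors are wrong; dedicated routines such as Stata's `ivregress` apply the correction automatically.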
90.
The first-stage regressions
------------------------------------
       m |    Coef.   Std. Err.
---------+--------------------------
   treat |    10.07        0.63
     x11 |    50.47        0.90
------------------------------------
91.
The second-stage regressions
------------------------------------
       y |    Coef.   Std. Err.
---------+--------------------------
       m |     2.00        0.02
   treat |    10.39        0.87
------------------------------------
92.
Naïve methods: the 2nd-stage regression
Use ordinary least squares to regress y on x10, m and treat:
  regress y m x10 treat
------------------------------------------
       y |    Coef.   Std. Err.
---------+--------------------------------
       m |     2.19        0.02
   treat |     3.67        0.75
------------------------------------------
THE DIRECT EFFECT OF TREATMENT IS SEVERELY BIASED.
93.
Now use all available data
  ivregress 2sls y treat x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 (m = x11), first
1st stage:          Coef.   Std. Err.
   treat |           9.77        0.26
     x11 |          50.73        0.37
2nd stage:
       m |           2.01        0.01
   treat |          10.01        0.55
CONSIDERABLE GAIN IN PRECISION.
Measurement of prognostic markers is not essential, but it makes the design more efficient (i.e. we get away with a smaller trial) – perhaps the difference between a viable trial and one that's just not feasible.
94.
'Naïve' 2nd-stage regression using all data
  regress y x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 m treat
-------------------------------------
       y |    Coef.   Std. Err.
---------+---------------------------
       m |     2.00        0.01
   treat |    10.05        0.54
-------------------------------------
If (but only if) we've measured all confounders then this is valid, and it is the most precise method. But ... we never know!
Returning to IV: there's a balance between bias and precision. We don't get something for nothing.
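The contrast between this slide and the biased naive regression two slides back can be sketched with the same simulated model (again in Python/numpy with assumed intercepts and error SDs rather than the slides' exact setup): omitting x1–x9 biases the naive OLS fit, while conditioning on every confounder recovers the true coefficients.

```python
import numpy as np

# Same data-generating model as on the 'true model' slides
rng = np.random.default_rng(7)
n = 1000
treat = np.repeat([0, 1], n // 2)
X = rng.integers(0, 2, size=(n, 9))         # x1-x9: the confounders
x10 = rng.integers(0, 2, size=n)
x11 = treat * x10
m = 5 * X.sum(axis=1) + 5 * x10 + 10 * treat + 50 * x11 + rng.normal(0, 5, n)
y = 5 * X.sum(axis=1) + 5 * x10 + 2 * m + 10 * treat + rng.normal(0, 5, n)

def ols(Z, v):
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return beta

ones = np.ones(n)
# Naive regression, pretending x1-x9 are unmeasured: regress y m x10 treat
b_naive = ols(np.column_stack([ones, m, x10, treat]), y)
# Regression conditioning on all measured confounders:
# regress y x1 ... x9 x10 m treat
b_full = ols(np.column_stack([ones, X, x10, m, treat]), y)
print("naive: m %.2f  treat %.2f" % (b_naive[1], b_naive[3]))  # treat effect badly biased
print("full:  m %.2f  treat %.2f" % (b_full[11], b_full[12]))  # near the true 2 and 10
```

Because m carries the omitted confounders x1–x9 into the naive regression, its coefficient is inflated and the direct treatment effect is pulled well below 10; with all confounders in the model, OLS is both valid and more precise than the IV fit.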
95.
The Key Ingredients
• Convincing psychological theory concerning the potential mechanism for mediation.
• Convincing theory to underline the belief that the treatment by moderator (predictive marker) interaction is a valid instrument.
• An appropriately powered trial for:
  – valid evaluation of treatment-effect moderation – on the mediator as well as on the outcome;
  – valid use of instrumental variables estimation to evaluate the treatment-effect mechanisms (mediation).
96.
Design considerations
• How big does the trial have to be?
  – Considerably larger than a conventional pragmatic trial.
• How strong does the moderating effect on the mediator have to be?
  – Our simulated example used a very strong moderating effect.
  – However, presumably it has to be reasonably strong to be of any serious interest.
• What does the prevalence of the alleles for the predictive biomarker have to be?
  – We used 50:50 (maximum power).
  – More likely to be of the order 90:10.
97.
Conclusions
• The scientific evaluation of stratified/personalised medicines/therapies is inseparable from mechanisms evaluation.
• So far, progress in trial design for mechanisms evaluation appears to have been very limited.
  – Interestingly, much more progress has been made for the 'softer' treatments (psychotherapies) than for hi-tech medicines.
• Good design uses prior scientific knowledge/evidence and makes full use of data from both prognostic and predictive markers.
• The required statistical methods are available and reasonably straightforward to use.