Cer symposium 2012_ioannidis

  1. Comparative effectiveness research: understanding and designing the geometry of the research agenda
     John P.A. Ioannidis, MD, DSc
     C.F. Rehnborg Chair in Disease Prevention
     Professor of Medicine and Professor of Health Research and Policy
     Director, Stanford Prevention Research Center, Stanford University School of Medicine
     Professor of Statistics (by courtesy), Stanford University School of Humanities and Sciences
  2. I want to make big money
     • My company, MMM (Make More Money, Inc.), has successfully developed a new drug that is probably a big loser
     • At best, it may be modestly effective for one or two diseases/indications, for one among many outcomes
     • If I test it in RCTs, even for these one or two indications, it may seem not to be worth it
     • But still I want to make big money
     • Please tell me: what should I do?
  3. The answer
     • Run many trials (this is the gold standard of research) with many outcomes on each of many different indications
     • Ideally against placebo (this is the gold standard for regulatory agencies) or straw-man comparators
     • Test 10 indications and 10 outcomes for each; just by chance you get about 5 indications with statistically significant beneficial results
     • A bit of selective outcome and analysis reporting will help present "positive" results for 7-8, maybe even for all 10 indications
     • There are meta-analysts out there who work for free and who will perform a meta-analysis based on the published data SEPARATELY for each indication, proving the drug works for all 10 indications
     • We can also depend on electronic databases to give us more evidence on additional collateral indications; people working on them also do it for free, since this research is tremendously underfunded
     • We love that all these people work for us and don't even know it!
     • With a $1 billion market share per approved indication, we can make $10 billion a year out of an (almost) totally useless drug
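The multiplicity arithmetic behind this slide can be checked with a short simulation (a sketch: the 10-indication, 10-outcome design is from the slide, while the independence of outcomes and the alpha = 0.05 threshold are my assumptions). For a truly useless drug, each indication with 10 independent outcomes has a 1 - 0.95^10 ≈ 40% chance of showing at least one "significant" result, so roughly 4-5 of 10 indications look positive by chance alone:

```python
import random

random.seed(0)

ALPHA = 0.05          # conventional significance threshold (my assumption)
N_OUTCOMES = 10       # outcomes tested per indication (from the slide)
N_INDICATIONS = 10    # indications tested (from the slide)
N_SIM = 10_000        # simulated development programs

def indications_with_a_hit():
    """Count indications with >= 1 significant outcome for a useless drug.
    Under the null, each outcome is 'significant' with probability ALPHA."""
    return sum(
        any(random.random() < ALPHA for _ in range(N_OUTCOMES))
        for _ in range(N_INDICATIONS)
    )

mean_hits = sum(indications_with_a_hit() for _ in range(N_SIM)) / N_SIM
print(f"Expected 'positive' indications per program: {mean_hits:.2f}")
# Analytically: 10 * (1 - 0.95**10) ≈ 4.0
```

Selective outcome and analysis reporting, which the slide mentions next, only pushes this count higher.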
  4. We probably all agree
     • It is stupid to depend on the evidence of a single study
     • when there are many studies, and a meta-analysis thereof, on the same treatment comparison and same indication
  5. Similarly
     • It is stupid to depend on a single meta-analysis
     • when there are many outcomes
     • when there are many indications the same treatment comparison has been applied to
     • when there are many other treatments and comparisons that have been considered for each of these indications, in randomized and non-randomized evidence
  6. Networks and their geometry
     • Networks can be defined as diverse pieces of data that pertain to research questions belonging to a wider agenda
     • Information on one research question may also indirectly affect the evidence on, and inferences from, other research questions
     • In the typical application, data come from trials on different comparisons of different interventions, where many interventions are available to compare
  7. Network geometry offers the big picture: e.g., making sense of 700 trials of advanced breast cancer treatment
     [Figure 2a: network of treatment comparisons; the size of each node is proportional to the amount of information (sample size)]
     Mauri et al, JNCI 2008
  8. Main types of network geometry: polygons, stars, lines, complex figures
     Salanti, Higgins, Ades, Ioannidis, Stat Methods Med Res 2008
  9. Diversity and co-occurrence
     • Diversity = how many treatments are available, and whether they have been equally well studied
     • Co-occurrence = whether there is preference for, or avoidance of, comparisons between specific treatments
     Salanti, Kavvoura, Ioannidis, Ann Intern Med 2008
  10. Diversity and co-occurrence can be easily measured and statistically tested
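As a toy illustration of such a measurement (a sketch: Shannon entropy is my stand-in index, not necessarily the diversity statistic used in the cited Ann Intern Med analysis), diversity can be scored from how evenly treatments appear across a network's trials:

```python
import math
from collections import Counter

def shannon_diversity(comparisons):
    """Shannon entropy of how often each treatment appears across trials.
    comparisons: list of (treatment_a, treatment_b) pairs, one per trial.
    Higher values = more treatments, more evenly studied."""
    counts = Counter(t for pair in comparisons for t in pair)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Two hypothetical 4-treatment networks with 4 trials each:
# a star (everything tested only against placebo) and an even ring
star = [("placebo", "A"), ("placebo", "B"), ("placebo", "C"), ("placebo", "A")]
ring = [("placebo", "A"), ("A", "B"), ("B", "C"), ("C", "placebo")]

print(f"star: {shannon_diversity(star):.3f}")  # placebo dominates
print(f"ring: {shannon_diversity(ring):.3f}")  # evenly studied -> ln(4)
```

The star scores lower because most of the information concentrates on placebo; the ring reaches the maximum for four treatments.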
  11. Homophily
     • ΟΜΟΦΙΛΙΑ = Greek for "love of the same" = birds of a feather flock together
     • Testing for homophily examines whether agents in the same class are disproportionately more likely to be compared against each other than against agents of other classes
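The homophily test just defined can be sketched as a simple permutation test (a hypothetical implementation; the statistic actually used in the cited analyses may differ): shuffle class labels across agents and count how often the same-class comparison tally matches or exceeds the observed one.

```python
import random

random.seed(1)

def homophily_pvalue(comparisons, agent_class, n_perm=10_000):
    """Permutation test for homophily: is the number of same-class
    head-to-head comparisons higher than expected if class labels
    were assigned to agents at random?"""
    agents = sorted(agent_class)
    labels = [agent_class[a] for a in agents]

    def same_class(mapping):
        return sum(mapping[a] == mapping[b] for a, b in comparisons)

    observed = same_class(agent_class)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        if same_class(dict(zip(agents, labels))) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Hypothetical toy agenda: two classes, almost all trials within-class.
# (Tiny networks bound how small the permutation p-value can get.)
classes = {"p1": "old", "p2": "old", "p3": "old",
           "a1": "new", "a2": "new", "a3": "new"}
trials = ([("p1", "p2"), ("p2", "p3"), ("p1", "p3")] * 3 +
          [("a1", "a2"), ("a2", "a3"), ("a1", "a3")] * 3 +
          [("p1", "a1")])
p = homophily_pvalue(trials, classes)
print(f"homophily p ≈ {p:.3f}")
```

With realistically sized networks, such as the antifungal agenda on the next slides, the same procedure can yield the p<0.001 results reported.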
  12. For example: the antifungal agents agenda
     • Old classes: polyenes, old azoles
     • New classes: echinocandins, newer azoles
  13. Rizos et al, 2010
  14. • Among the polyene and azole groups, agents were compared within the same class more often than across classes (homophily test p<0.001 for all trials)
      • Lipid forms of amphotericin B were compared almost entirely against conventional amphotericin formulations (n=18 trials), with only 4 comparisons against azoles
  15. [Figure 2: network of trial comparisons among antifungal agents (amphotericin B, lipid amphotericin B, fluconazole, itraconazole, voriconazole, posaconazole, ketoconazole); edge labels give the number of trials per comparison]
  16. • There was strong evidence of avoidance of head-to-head comparisons for newer agents. Only 1 of 14 trials of echinocandins compared two different echinocandins head-to-head (p<0.001 for co-occurrence). Of 11 trials on newer azoles, only one compared a newer azole with an echinocandin (p<0.001 for co-occurrence).
  17. [Figure 3: network of trial comparisons among the echinocandins (anidulafungin, caspofungin, micafungin) and other comparators; edge labels give the number of trials per comparison]
  18. [Network of comparisons between the echinocandins, the newer azoles (voriconazole or posaconazole), and other comparators; edge labels give the number of trials per comparison]
  19. Auto-looping
     Design of clinical research: an open world or isolated city-states (company-states)?
     Lathyris et al., Eur J Clin Invest, 2010
  20. Reversing the paradigm: design networks prospectively
     – Data are incorporated prospectively
     – The geometry of the research agenda is pre-designed
     – The next study is designed to enhance and improve the geometry of the network, and to maximize informativity given the network
  21. This may be happening already? Agenda-wide meta-analyses
     BMJ 2010
  22. Anti-TNF agents: $10 billion and 43 meta-analyses, all showing significant efficacy for single indications
     [Figure: timeline of the 5 FDA-approved anti-TNF agents (infliximab, etanercept, adalimumab, golimumab, certolizumab pegol) and their indications: rheumatoid arthritis, juvenile idiopathic arthritis, psoriasis, psoriatic arthritis, ankylosing spondylitis, Crohn's disease, ulcerative colitis]
  23. 1,200 (and counting) clinical trials of bevacizumab
  24. Fifty years of research with 2,000 trials: 9 of the 14 largest RCTs on systemic steroids claim statistically significant mortality benefits
     Contopoulos-Ioannidis and Ioannidis, EJCI 2011
  25. How about non-randomized evidence?
     • Epidemiology
     • Cohort studies
     • Electronic record databases
     • Registries
     • Propensity-adjusted effects
     • Biobanks
     • Patient-centered outcomes research
     • You name it
  26. Comparisons between randomized and non-randomized evidence: 7 pairs with discrepancies beyond chance
     Ioannidis J. et al. JAMA 2001;286:821-830
  27. Inflated effects for cardiovascular biomarkers in observational datasets
     Tzoulaki, Siontis, Ioannidis, BMJ 2011
  28. Inflation in statistically significant treatment effects of meta-analyses of randomized trials
     Ioannidis, Epidemiology 2008
  29. Some systemic changes
     • Public upfront availability of protocols
     • Public eventual availability of raw data
     • Public upfront availability of research agendas
     • The reproducible research movement
  30. Science 2011
  31. So, what should the next study do?
     • Maximize diversity
     • Address comparisons that have not been addressed
     • Minimize co-occurrence
     • Break (unwarranted) homophily
     • Be powered to find an effect, or to narrow the credible or predictive interval, for a specific comparison of interest
     • Maximize informativity across the network (the entropy concept)
     • Some/all of the above
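The entropy idea in the list above can be made concrete (a sketch under my own simplification: entropy over comparison counts only, ignoring the power and interval-narrowing criteria): score each candidate comparison by how much adding one more trial of it would raise the network's entropy, and run the one that gains most.

```python
import math
from collections import Counter
from itertools import combinations

def comparison_entropy(counts):
    """Shannon entropy of how trials are spread across comparisons."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)

def best_next_comparison(treatments, existing_trials):
    """Choose the head-to-head comparison whose addition most increases
    the entropy of the trial network, i.e. favour under-studied pairs."""
    counts = Counter(frozenset(pair) for pair in existing_trials)

    def gain(pair):
        hypothetical = Counter(counts)
        hypothetical[frozenset(pair)] += 1
        return comparison_entropy(hypothetical)

    return max(combinations(sorted(treatments), 2), key=gain)

# Hypothetical star-shaped agenda: every agent tested only against placebo
trials = ([("placebo", "A")] * 6 + [("placebo", "B")] * 6 +
          [("placebo", "C")] * 6)
best = best_next_comparison({"placebo", "A", "B", "C"}, trials)
print(best)  # an unstudied active-vs-active comparison wins
```

On a star-shaped network this criterion automatically selects an active-vs-active comparison, breaking the star geometry the earlier slides criticize.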
  32. Bottom line
     • We need to think about how to prospectively design large agendas of clinical studies and their respective networks
     • This requires a paradigm shift in the nature and practice of comparative effectiveness research, compared with current standards
