
Agile Science


  1. @ehekler, ehekler@gmail.com www.agilescience.org Keynote @ #ISRII2017 Eric Hekler, PhD, Associate Professor, Arizona State University; Associate Professor, University of California, San Diego (Dec 2017 onward)
  2. Thank you! @ehekler • Predrag (Pedja) Klasnja, John Harlow, Elizabeth Korinek, Sayali Phatak, Bill Riley, Daniel Rivera, Mathew Buman, Kevin Patrick, Bob Evans, Cesar Martin, Jennifer Huberty, Marios Hadijamichael • Linda Collins & MOST • The Robert Wood Johnson Foundation • DISCLAIMER: I am a scientific advisor to eEcoSphere, Proof Pilot, Omada Health, HopeLab, & Sage Bionetworks
  3. @ehekler People are different. Context matters. Things change.
  4. Summary @ehekler • Goal: – Knowledge accumulation to support behavior change • Problems – Evidence created vs. needed – Complex causal problem vs. simple causal philosophy • “Building blocks” – Modules – Computational models – Decision policies – Tools for personalization • Activities in the process – Creating – Optimizing – Repurposing – Curating
  5. Crisis of methods
  6. Crisis of methods “Most scientists are not trained today on the basics of epistemology or logic… We need to go back to work on the basics.” - Dr. Arturo Casadevall, Johns Hopkins Bloomberg School of Public Health
  7. @ehekler Supply & demand problem (2×2 of Evidence Supplied × Evidence Demanded): supplied & demanded = Success; supplied but not demanded = Problem; demanded but not supplied = Need; neither supplied nor demanded = Success. McNie, Parris, Sarewitz, 2016, Research Policy
  8. Patients: What do I do now? https://pixabay.com/p-690128/?no_redirect
  9. Practitioners: What do I do now?
  10. Policy-Makers: What do we do now? Harlow, Hekler, Johnston, Yeh, under review @ehekler
  11. What should I (or my client) do now in this context to produce the desired outcome(s)? @ehekler
  12. https://www.guideline.gov/summaries/summary/39432/diagnosis-and-treatment-of-depression-in-adults-2012-clinical-practice-guideline
  13. Usable evidence @ehekler “From evidence-based decision-making to decision-based evidence-making.” - Margaret Laws, HopeLab
  14. (Plausibly) meaningful variability @ehekler: Universal, Sub-group, Context-bound, Idiosyncratic
  15. User’s needed evidence (based on the question): Universal, Sub-group, Context-bound, Idiosyncratic. Evidence generated (based on study designs used): Universal, Sub-group, Context-bound*, Idiosyncratic* (*largely considered “noise”) @ehekler
  16. What is causality? How do we infer it? • Cause preceded effect • Cause related to effect • No alternative explanations
  17. Can a cause/effect occur without a human? https://commons.wikimedia.org/wiki/File%3AIf_a_tree_falls_in_the_forest.jpg
  18. Can a cause/effect occur one time only?
  19. Can a cause/effect occur only sometimes?
  20. INUS condition. [Preconditions]: Insufficient but Necessary parts of a condition which is itself [Mechanism of action] Unnecessary but Sufficient. Mackie, J. L., 1965. “Causes and Conditions”, American Philosophical Quarterly, 2: 245–64.
  21. Pre-conditions • When, where, for whom, and in what state will a given intervention produce the desired outcome. @ehekler Hekler et al. 2016, AJPM
  22. Fundamental mechanism of action @ehekler
  23. @ehekler www.agilescience.org
  24. Guiding principles @ehekler: Efficiency, Continuous optimization, Usability, Triangulation
  25. Agile Science Tools @ehekler: Modules, Computational models, Decision policies, Tools of personalization
  26. Complex interventions vs. modules: from perfect “packages” (Flickr: Paul Swansen) to repurposable pieces (Flickr: Benjamin Esham) @ehekler www.agilescience.org
  27. Complex interventions @ehekler
  28. Modules @ehekler: Inputs → Process → Output
  29. Modularizing health interventions @ehekler: Pain Reduction Tool, Goal-setting Tool, Walking Reminder Tool, Social Support Tool, Glucose Monitoring Tool, Insulin Dosage Tool
  30. Proximal outcomes of the module: the shortest timescale for measuring a meaningful effect @ehekler. Chain: Prompt to walk → Walk within 30 min of prompt (proximal outcome, often skipped/ignored) → Steps/day → National guidelines (PA/wk) → Cardiovascular fitness (VO2) → CVD www.agilescience.org
  31. Computational models: linking interventions, individuals, context, & outcomes. Riley, Martin, Rivera, Hekler, et al. 2016; Martin, Riley, Rivera, Hekler, et al. 2014 @ehekler
  32. Decision policies: matchmaking interventions with individual & contextual differences www.netflix.com @ehekler
  33. Decision policies. Martin, Rivera, & Hekler, Am. Control Conference (2015) @ehekler
  34. Tools of personalization: “learning” adjustments when previous evidence does not match @ehekler. Eng. (adaptive control), CS (e.g., reinforcement learning)
  35. Tools of personalization: “learning” adjustments when previous evidence does not match @ehekler. Self-experimentation cycle: Goal + plan → Implement for 1 week → Measure success towards goal → Results
  36. User’s needed evidence @ehekler: Universal, Sub-group, Context-bound, Idiosyncratic. What should I (or my client) do now in this context to produce the desired outcome(s)?
  37. Agile Science process: using evolution as a model for the scientific process @ehekler www.agilescience.org
  38. Modeling evolution @ehekler: Variability generation = Create; Natural selection = Optimize; Niche expansion = Repurpose
  39. @ehekler
  40. Create: design specific solutions for specific problems @ehekler
  41.
  42.
  43.
  44. Training guide available now! www.agilescience.org/resources.html
  45. Optimize: engineer until success criteria (i.e., the optimization criteria) are met. @ehekler
  46. Linda M. Collins, The Methodology Center, Penn State methodology.psu.edu @ehekler
  47. Optimization trials • Screening experiment • SMART • Micro-randomization trial • Control systems optimization trial
  48. Daily step goal & rewards. Hekler (PI), Rivera (Co-PI), NSF IIS-1449751. [Figure: recommended daily step goal vs. actual daily steps, with average change in self-efficacy] @ehekler
  49. Optimization criteria @ehekler Hekler et al. under review • Initiation “set-point” – 10,000 steps/day, on average per week, for 22 out of 26 weeks OR – +3,000 steps/day, on average per week, relative to baseline for 22 out of 26 weeks • Maintenance set-point – Same steps set-point – 0 interactions with participant, except use of wearable device (a minimal check of these criteria is sketched just after this slide list)
  50. Model-predictive controller. Martin, Rivera, & Hekler, Am. Control Conference (2015; 2016) @ehekler
  51. Control engineering optimization trial: Open loop → Closed loop → Maintenance @ehekler
  52. Repurpose: determine [generalize] for whom, when, and where else the tool might be useful. @ehekler
  53. INUS condition. [Preconditions]: Insufficient but Necessary parts of a condition which is itself [Mechanism of action] Unnecessary but Sufficient. Mackie, J. L., 1965. “Causes and Conditions”, American Philosophical Quarterly, 2: 245–64.
  54. Modularizing 1) Cutting out pre-conditions • When, where, for whom, and in what state will a given intervention produce the desired outcome. @ehekler Hekler et al. 2016, AJPM
  55. Modularizing 2) Distill mechanism of action from variations @ehekler
  56. Modularizing 2) Distill mechanism of action from variations
  57. Science of matching/generalization • Does it remain true across variations among other people, places, times, treatments? • Is it predictive of the future for that same person/unit of study? Shadish, Cook, & Campbell, 2002
  58. Science of matching: Intervention constructs & operations, Optimization criteria, Niche definition
  59. Complexity map
  60. Science of matching: meaningful variations in hormone replacement therapy; meaningful definitions of success; meaningful clusters of people, places, times (i.e., niches)
  61. Pragmatic clinical trials? • Implementation science • Scaling up and scaling out • Connection? – ACTS? – Others? – LOVE TO HEAR YOUR THOUGHTS!
  62. Curate: organize information to make it accessible for decision-making. @ehekler
  63. The Human Behaviour-Change Project, a Collaborative Award (funder and participating organisations shown as logos) @HBCProject www.humanbehaviourchange.org Adapted from Susan Michie’s slides: http://www.ucl.ac.uk/human-behaviour-change
  64. Human Behaviour-Change Project (computer science, information science, behavioural science): Ontology of behaviour change interventions (how can we organise the evidence?); Extracting and interpreting the evidence (what does the evidence show?); Making the evidence accessible at scale in real time (how can we make the evidence usable?) Adapted from Susan Michie’s slides: http://www.ucl.ac.uk/human-behaviour-change
  65. Extracting. Image courtesy of Kai Larsen
  66. Organizing. Larsen, Michie, Hekler et al. 2017
  67. Using • “The big question”: What works, compared with what, how well, with what degree of exposure, for whom, in what settings with what behaviours, and why? Adapted from slides from Robert West; http://www.ucl.ac.uk/human-behaviour-change
  68. Summary @ehekler • Goal: – Knowledge accumulation to support behavior change • Problems – Evidence created vs. needed – Complex causal problem vs. simple causal philosophy • “Building blocks”: – Modules – Computational models – Decision policies – Tools for personalization • Activities in the process: – Creating – Optimizing – Repurposing – Curating
  69. Open questions @ehekler • What does science look like when people are different, context matters, and things change? • What about citizen-led science? • What does a 21st-century scientist do? – Science of matching – Empower citizens/practitioners • How might funding look different?
  70. @ehekler, ehekler@gmail.com www.agilescience.org Eric Hekler, PhD, Associate Professor, Arizona State University; Associate Professor, University of California, San Diego (Dec 2017 onward)
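
The initiation criteria on slide 49 are concrete enough to express as a quick check. Below is a minimal sketch in Python of how that set-point might be evaluated from weekly step averages; the function name, defaults, and data layout are illustrative assumptions, not the study's actual code.

```python
# Minimal sketch of the slide-49 initiation criteria (illustrative only;
# names and data layout are assumptions, not the study's actual code).

def meets_initiation_setpoint(weekly_avg_steps, baseline_avg_steps,
                              absolute_target=10_000, relative_gain=3_000,
                              required_weeks=22, total_weeks=26):
    """weekly_avg_steps: average steps/day for each of `total_weeks` weeks."""
    assert len(weekly_avg_steps) == total_weeks
    absolute_ok = sum(w >= absolute_target for w in weekly_avg_steps)
    relative_ok = sum(w >= baseline_avg_steps + relative_gain
                      for w in weekly_avg_steps)
    # The criterion is met if EITHER condition holds for >= 22 of 26 weeks.
    return absolute_ok >= required_weeks or relative_ok >= required_weeks

# Example: a participant averaging ~10,500 steps/day in 23 of 26 weeks
weeks = [10_500] * 23 + [8_000] * 3
print(meets_initiation_setpoint(weeks, baseline_avg_steps=6_000))  # True
```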

Editor's Notes

  • Any time a person asks, “what do I do now?” and they don’t have good evidence to help them make a decision, I think of that as science not supplying the information that’s being demanded. Sadly, I think, it’s also true that scientists are creating evidence that nobody really wants.

    Why does this supply and demand problem exist?
  • Professionals still focus on “on average” science (even, it appears, with many precision medicine efforts)
    Professionals need to move towards studying the utility of personalization algorithms

    Creators, users, and participants bring resources into different stages of the process. All of it is driven by a decision, though.
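
To make "studying the utility of personalization algorithms" slightly more concrete, here is a minimal epsilon-greedy bandit sketch in Python that learns, per person, which intervention option tends to work. The option names and the simulated reward signal are hypothetical, not from any study in the talk.

```python
import random

# Minimal epsilon-greedy sketch of a personalization algorithm
# (hypothetical options and reward signal, for illustration only).

OPTIONS = ["walking_prompt", "goal_reminder", "social_nudge"]

class EpsilonGreedy:
    def __init__(self, options, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {o: 0 for o in options}
        self.values = {o: 0.0 for o in options}  # running mean reward

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)  # exploit

    def update(self, option, reward):
        self.counts[option] += 1
        n = self.counts[option]
        # incremental update of the running mean
        self.values[option] += (reward - self.values[option]) / n

agent = EpsilonGreedy(OPTIONS)
for day in range(60):
    option = agent.choose()
    # simulate a person who responds only to walking prompts, 60% of the time
    reward = 1.0 if option == "walking_prompt" and random.random() < 0.6 else 0.0
    agent.update(option, reward)
print(agent.values)  # "walking_prompt" should dominate for this simulated person
```

Evaluating the utility of the algorithm itself (rather than any single option) then becomes a question of comparing outcomes under this adaptive policy against a fixed, "on average" policy.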
  • How does this help with supply and demand? Well, modularizing the evidence makes it so that any time someone is looking for tools to, for example, help someone walk, the module tools can be used as part of their intervention. This is like the difference between Lego pieces vs. the things created from Legos, like this Volvo car; intervention modules are much more likely to be repurposable, precisely because they are small and scoped.

    Then, all you need are instructions, such as models and algorithms, to figure out how to package them together for potentially novel uses, which becomes possible when the evidence is more about the modules.
  • Our first secret weapon: modules!
    Modules and APIs are the backbone of the digital economy. They all have the basic form of Inputs, Process, and Output. For example, when you use Google Maps, you put in addresses of where you are and where you want to go, Google does its magic process, and out come directions.

    The cool thing is that Google Maps was built to do that scoped task very well, but not necessarily anything else. This scoping of its purpose makes it so that Google Maps can be used across the internet to help Yelp, companies, universities, and others find their way.
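
To ground the Inputs → Process → Output idea, here is a minimal sketch of what a health-intervention module interface could look like in Python. The class names, fields, and thresholds are illustrative assumptions echoing the Google Maps analogy, not a published specification.

```python
from dataclasses import dataclass
from typing import Protocol

# Minimal Inputs -> Process -> Output module sketch (illustrative names only).

@dataclass
class Context:
    steps_today: int
    hour_of_day: int

class Module(Protocol):
    def run(self, context: Context) -> str: ...

class WalkingReminderTool:
    """Scoped like Google Maps: one job, done well, reusable anywhere."""
    def run(self, context: Context) -> str:
        # Process: prompt only when steps are low and the hour is reasonable.
        if context.steps_today < 3000 and 9 <= context.hour_of_day <= 20:
            return "Prompt: take a 10-minute walk."
        return "No prompt."

# Any larger intervention can compose modules through the same interface.
print(WalkingReminderTool().run(Context(steps_today=1200, hour_of_day=14)))
```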
  • We think there’s great opportunity to think much more deeply about how to modularize health interventions, including behavioral interventions and disease management strategies, and even more advanced tools like the components of medical devices.

    Doing this, though, requires a different type of science that is built around modules and their use in the real world, not multicomponent complex interventions.

    How does it work?
  • Based on this, we need to move more into an open discussion in which we explore lots and lots of different ideas if we really want to understand which ones are best.
    Sadly, science, particularly behavioral science, doesn’t really have the sort of “maker” culture that would allow us to do this; as such, it is a key emphasis here.
  • My colleagues and I have been developing a process to counteract the supply and demand problem and the resource transfer problem and, by extension, speed the pace of the health sciences.

    We call it Agile Science.

    How does it work? Well, we’ve got three secret weapons: modules, modeling evolution as a process, and fostering co-creation and early-and-often sharing across a community.
  • Getting this to generalize and expanding its reach occurs through careful curation, which, based on the evolution analogy, can be considered niche expansion. Just like how Facebook was first tested at Harvard, then Stanford, and so on, and slowly grew out, we picture a delicate back and forth between evaluating and curating to see how broad a given “niche” is for each modular tool and packaging of tools, to enable more rapid and thoughtful uses of the tools created.

    Overall, we see this process as giving us the flexibility and the resources we feasibly need for fostering a healthy ecosystem, which is essential for fostering a culture of health.
  • I’ve been calling this alternative process agile science, which I’ll jump into briefly here.
  • Thankfully, there has been great movement away from that classic pipeline, and particularly away from the use of a single randomized trial of an intervention with multiple components in it, towards other strategies that are more mirrored on strategies from engineering. Central to this work is a careful understanding of how to develop the evidence around the components of the intervention, with the assumption being that the components will be more repurposable. So, for example, Linda Collins has been pioneering the use of fractional factorial study designs to run interventions with multiple components, but with a methodology that supports understanding of how the components function and interact.
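
As a toy illustration of the fractional factorial idea: with three on/off intervention components, a half-fraction can be constructed from the defining relation I = ABC, halving the number of experimental cells while keeping main effects estimable (at the cost of confounding them with two-way interactions). The component names below are hypothetical, and this is a generic construction rather than MOST-specific code.

```python
from itertools import product

# Half-fraction of a 2^3 factorial via the defining relation I = ABC:
# keep only runs where A*B*C = +1 (components coded -1 = off, +1 = on).
components = ["goal_setting", "reminders", "social_support"]

full_design = list(product([-1, 1], repeat=3))          # 8 cells
half_fraction = [run for run in full_design
                 if run[0] * run[1] * run[2] == 1]      # 4 cells

for run in half_fraction:
    settings = {c: ("on" if level == 1 else "off")
                for c, level in zip(components, run)}
    print(settings)
```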
  • Black-box modeling is the first step in the system identification analyses. We used goals, points, and some of our self-report measures as inputs to predict daily steps in this procedure.

    The primary interest here is to fit the data regardless of a particular structure of the model, so this does not consider the SCT model structure when conducting the analyses.
    It is typically a trial-and-error process where you estimate the parameters of various structures and compare results.
    Minimal knowledge of the structure is used, so we used an autoregressive (ARX) model structure (consistent estimation with probability of 1).

    What we have currently been doing as part of the black-box modeling is finding the best-fitting model for all participants. As I mentioned earlier, there are various ways to go about this, and we carried out an exhaustive search looking over every possible ARX structure (output and input lags); this is a trial-and-error process.
    We used all combinations of cycles for estimation and validation.

    We have been trying to find ties to the statistical methods we use in the social sciences for this process, such as checking assumptions, to try to bring structure into interpreting and choosing the best models for each participant in a way that is also reliable. This is our first pass at this.

    In choosing these models, we have looked at the best average validation fits (using roughly a 50-50 estimation/validation split) and cross-correlations between the inputs.
    We looked at cross-correlations among the inputs to try to use only orthogonal signals, so we removed those signals that were highly correlated, to choose the most parsimonious models.
    We also tried to maintain inter-rater reliability by having two different individuals go over the model-choosing process.

    We will be able to properly validate these models only when they enter a controller / when we do the semi-physical modeling, which uses the SCT model structure.

    Only 1 participant was below a 10% model fit, suggesting a “good enough” model fit for 95% of our sample.

    For all combinations of cycles as estimation and validation sets, we chose the best (most predictive) ARX structure for that combination, computed the model fit per cycle in the validation set, and then averaged over that.
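
The exhaustive ARX search described in this note can be sketched compactly. Below is a least-squares ARX fit with a search over small output/input lag orders and a roughly 50-50 estimation/validation split, scored with the standard NRMSE fit percentage; the synthetic data and lag ranges are illustrative stand-ins for the study's actual signals and procedure.

```python
import numpy as np

# Sketch of an exhaustive ARX structure search with a 50-50
# estimation/validation split (synthetic data, illustrative only).

rng = np.random.default_rng(0)
T = 120
u = rng.normal(size=T)                 # input, e.g., daily step goal
y = np.zeros(T)                        # output, e.g., daily steps
for t in range(2, T):
    y[t] = 0.5 * y[t-1] + 0.8 * u[t-1] + 0.1 * rng.normal()

def fit_arx(y, u, na, nb):
    """Least-squares ARX: y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j]."""
    start = max(na, nb)
    rows = [np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]])
            for t in range(start, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
    return theta

def fit_percent(y, u, theta, na, nb):
    """NRMSE fit %: 100 * (1 - ||y - yhat|| / ||y - mean(y)||)."""
    start = max(na, nb)
    yhat = np.array([np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]) @ theta
                     for t in range(start, len(y))])
    resid = y[start:] - yhat
    return 100 * (1 - np.linalg.norm(resid)
                  / np.linalg.norm(y[start:] - y[start:].mean()))

half = T // 2                          # ~50-50 estimation/validation split
best = max(((na, nb) for na in range(1, 4) for nb in range(1, 4)),
           key=lambda s: fit_percent(y[half:], u[half:],
                                     fit_arx(y[:half], u[:half], *s), *s))
print("best (na, nb):", best)
```

In the actual procedure this search would be repeated over all estimation/validation cycle combinations and the validation fits averaged, as the note describes.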
