Provocation at conference


  • Thank you very much for having us provide the provocation related to methods. At this point we feel a bit like the small matador facing the mighty bull: we hope to provoke good conversation without being impaled. Our issue to provoke focuses on using and developing new research methods to more rapidly advance the field and exploit the rapid pace of technological development. Which new methodologies need to be developed for testing theories that permit more agile, iterative development and evaluation?
  • The core problem with our methods can ultimately be summarized like this. As Bonnie Spring discussed in her response to this question, the academic community is slow and careful, and uses sequential testing of health interventions through progressively more challenging experiments and trials. This can be seen in the timeline for our winning achievement, at least in the US: receiving an R01 grant. Here is the timeline for that one study. This timeline is, of course, in stark contrast to industry timelines and goals. Again, as Bonnie aptly pointed out, industry follows the “fail early” mantra of software startups, “which results in intentional launches of imperfect products that are then rapidly improved based on user data from the field.” Or, as Ilkka said, traditional RCT designs, which originate from pharmacological treatment trials, are not optimal for mHealth and other modern technologies. Based on impact, it seems rather apparent that industry currently has a much larger impact on society, both positive and negative, than we likely do. What should we do about this?
  • Because the landscape has truly shifted underneath us. Classically, the role of scientists was to discover new truths, often divorced from any practical implication. This knowledge-for-knowledge's-sake movement was powerful. With the translational science movement, we are all seeing, and are actively involved in, a shift in our purpose: we need to figure out not just how to discover, but also how to translate and disseminate our work. Further, there appears to be an underlying paradigm shift toward fostering societal change in this process.
  • This is really important to bring up in relation to methods, because the actions we will need to take to use and create these new methods and theories are not reinforced in our current academic system. Think about it. Our old values focused on knowledge for knowledge's sake: long-timescale, highly pre-planned research, largely conducted within disciplinary silos, with an intentional fear of partnering with businesses, or of thinking at all about business ramifications, in order to reduce bias. In our new value system for science, though, we appear to be moving in quite contradictory directions: research that has social relevance; small, iterative studies conducted by transdisciplinary teams that are embedded within their community; and even partnerships with businesses, either to find new sources of data or to work through the business model that would allow an evidence-based idea to become a self-sustaining business. Such a shift will require us to create new strategies for reinforcing the methods and practices we want, such as better sharing of data, as we discussed during the last provocation. To push sharing to an extreme possibility: what if we abolished the old model of scientific publications and instead created an open-source, GitHub-like wiki that fosters not just sharing but real-time peer review? For those of you who have not used GitHub, it is a code repository that lets you keep track of a wide variety of versions of code and keeps good documentation of those changes. This could be really advantageous for science if it were put into a hierarchical, wiki-style system to which we all contribute the ongoing results from our studies and processes, organizing us to work more efficiently together. Think about it: do we really want peer review only during the early stages of the process, when a grant is submitted, and then again after all of the work is done? Wouldn't it be more cost-effective if “publishing” were as dynamic as the new theories and methods we are trying to develop?
  • Put differently, we can't panic, but we do need to organize. We should start with a clear understanding of the behaviors and actions that are being reinforced through things like the tenure and promotion process, and work to optimize our metrics for being a successful academic so that we positively reinforce the actions that support the common good of our entire scientific community. Only if we get the metrics of academic success right, so that they foster the values and goals we want to achieve, will we be able to accomplish our goals. This is particularly true in light of a comment from Brigitte, who said: “The issues that plague society today (sedentary behavior and high intake of refined carbohydrates) are not likely to persist indefinitely. Eventually these will be largely taken care of and society will be battling a new demon and our science must be agile enough to provide solutions when they are needed rather than describe what happened in the past.” If we keep going as we do, our behavioral theories and our science will likely remain simply a reaction to what industry has done rather than a force for creating a better society.
  • To summarize, here is our broad question: How might we create new methods for reinforcing the actions of academics to foster the rapid development of new methods and theories?
  • Of course, a provocation about methods should also include a discussion of specific methods. We intentionally saved this point for last, though, because we believe our first priority is to figure out what our job is and how to measure success under that new possible job description; and we are happy to have gone second, after the discussions about how to share, as that has important implications for ways to facilitate data sharing. Gaining a better handle on those two issues will then help us understand the methods we need to use. At present, we propose that the fundamental problem with our methods is that we use too few “tools” to answer too many of our questions. In particular, we collectively do not have a clear understanding of when it is NOT appropriate to use certain methods, but use them anyway because they are the “gold standards.” The “gold standard” method, the randomized controlled trial, is our steadfast hammer. As Ilkka discussed, “Traditional RCT designs which originate from pharmacological treatment trials are not optimal for mHealth and other modern technologies.” RCTs have been our traditional method for asking questions of the form “does it work?” That is, of course, a very important question, but there are a variety of other questions we are also trying to get into that likely require different tools. For example, there is increasing discussion about the vital importance of identifying an appropriate control group. Not enough effort is spent understanding the precise question we are exploring, and by extension we fail to establish the appropriate control group. In our view, though, this ultimately stems from our slow and methodical nature, which is reinforced by our current grants process. Put differently, an R01 as traditionally reviewed emphasizes one BIG study to answer a question. We are realizing more and more that we have a lot of questions we want to ask. Rather than design different studies that are optimized for each question, we instead try to get double or even triple duty out of our RCTs. With an RCT we can explore whether an intervention package works better than nothing, or better than some other active intervention. We are also now getting into questions about mediation of outcomes through secondary data analyses, or exploring who responds best to which interventions via moderation analyses. We are asking a lot of our RCTs, though, and the more questions we keep adding to one study, the more diluted and ineffective the method becomes. This was raised by Pamela Kato, who suggested that our current “validation” studies, as she put it, are often diluted, resulting in lots of conclusions about systems “working” even though the study started as a relatively shoddy endeavor and was never properly optimized to answer the question. Put metaphorically, we are trying to build an entire house of behavior change theory using very few tools, such as our trusty RCT “hammer” or our “survey” saw. As Pamela Kato wrote: “It would be great if the lack of rigorous validation studies did not impact future research and development of health behavior measurement and motivation systems. ... Unfortunately, I think that the current lack of rigorous validation studies is already having a negative effect on the field moving forward. We are developing interventions based on weak behavior change models and we are putting out interventions based on those models with very weak or absolutely no evidence showing that they work. … Finally, a lot of people talk about how randomized trials aren't the end of all the validation research. That is true, but we don't have enough randomized trials of behavior change applications so that we can rationalize that it is time to turn to other ways of looking at efficacy or validity.” We need to clearly answer the question “does it work?” first, and then explore the intricacies of what works.
  • Instead, we as a collective group need to gain a better understanding of the wide variety of tools at our disposal. These include techniques such as:
    - community-based participatory research designs from behavioral science, and user experience design principles from design and HCI;
    - Linda Collins's Multiphase Optimization Strategy (MOST), which utilizes fractional factorial studies;
    - Collins's SMART trials, which attempt to better understand clinical decision rules through re-randomization;
    - N-of-1 study designs;
    - Daniel Rivera's focus on using dynamical systems to create control systems;
    - data mining and machine learning techniques to recognize patterns in data;
    - the use of wikis and open-source tools for fostering large-scale collaboration, as discussed earlier; and
    - new initiatives and strategies. For example, how can we do better at determining the theoretical fidelity of an intervention component? Can we use alternative user experience design processes?
With that, the question becomes: what are the tools we need to collectively become good at using to fulfill our possible new goals, live up to our values, and ultimately share our resources? One related comment is worth integrating here: “Although I am all in favor of testing new technology because the ‘me too’ app might work better or it might not work at all, I also recognize there are diminishing marginal returns in testing me-too applications. They produce less scientific glory; they attract less external sponsorship, and more conflicted external sponsorship; and really they should be tested against other active interventions and, as those get increasingly effective, evaluations require increasingly larger samples to demonstrate smaller and smaller incremental gains, meaning that trials get more expensive even as they produce less benefit. On the other hand, untested interventions that are hyped in order to advance their commercial potential are a real social welfare drain. I think there are some rapid cycle innovation approaches that can help here.”
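To make one of these tools concrete: MOST-style screening relies on fractional factorial experiments, in which a carefully chosen fraction of the possible on/off combinations of intervention components still allows each component's main effect to be estimated. Here is a minimal sketch of a 2^(4-1) half-fraction design; the component names and the specific design choice are illustrative assumptions, not drawn from any particular study.

```python
# Sketch of a 2^(4-1) half-fraction factorial design for screening four
# hypothetical intervention components in 8 conditions instead of 16.
from itertools import product

components = ["coaching", "reminders", "goal_setting", "peer_support"]

design = []
# Full 2^3 factorial over the first three components (+1 = on, -1 = off) ...
for a, b, c in product([-1, 1], repeat=3):
    # ... with the fourth set by the defining relation D = ABC, which
    # confounds its main effect with the three-way interaction.
    d = a * b * c
    design.append(dict(zip(components, (a, b, c, d))))

for condition in design:
    print({k: ("on" if v == 1 else "off") for k, v in condition.items()})
```

The defining relation halves the number of experimental conditions at the cost of aliasing the fourth component's main effect with a three-way interaction, which screening designs typically assume is negligible.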
  • An important subpoint, though, is recognizing that we are pushing for culture change when we ask our peers to review and understand these alternative methods. Indeed, sometimes the most stifling aspect of science comes from ourselves. For example, last year Daniel Rivera and I submitted a grant to the Young Innovators Awards from the NIH, which focuses primarily on supporting highly innovative ideas from young investigators that could “transform the field.” I won't go into details about the grant, but the important thing to note relates to the reviews. We received praise for our innovativeness, with statements like: this was a “major and extremely promising innovation.” In terms of our qualifications we were told, “the investigator has the appropriate expertise and experience to lead a team that includes well established experts in the contributory disciplines and research approaches.” These comments, in various forms, were echoed by all three reviewers. The ONLY negative comment was, “It is a methodological study and, therefore, less compelling than a study that would directly address a health problem.” There were no other negative comments, but the grant was not discussed. I am obviously biased because it was my grant, so take my opinion with a grain of salt, but I think this pretty nicely illustrates a much larger problem we must face about our methods. Specifically: how can we shift our culture to recognize that many of these new tools are potentially more appropriate than our tried-and-true previous tools? Indeed, as Vicente points out, “we should keep in mind that behavioral drivers would be the same as this is something coming/inherited from our society (our Occidental behavioral drivers have almost not changed for last centuries) and technologically independent.”
  • So, again, our two broad questions are listed here. Let's take the next 12 minutes to discuss them. In terms of process, perhaps we can dedicate 4 minutes to each question and use the final 4 minutes to sum up and arrive at a starting answer for how using and developing new research methods can more rapidly advance the field, but we are open to whatever the group thinks would be the best process.

    1. Methods Provocation. This provocation focuses on using/developing new research methods to more rapidly advance the field and exploit the rapid pace of technological development. Eric Hekler, Vicente Traver
    2. 500,000th App Accepted on App Store. Timeline 2005–2012 for one study: conceive of a study → gather pilot data → submit grant → receive funding → conduct the study → submit publications for review. (Flickr – Metrix X)
    3. Translation. (Flickr – Mathieu Struck)
    4. The new methods and theories we are proposing are not reinforced in the current academic system. Current values vs. new values:
       • Knowledge for knowledge's sake → Relevance and social value
       • Pre-planned research → Iterative experimentation
       • Discipline-driven → Interdisciplinary teams and community engagement
       • Fiscal disconnect → Fiscal sustainability
       • Post-hoc peer-reviewed publications → Open-source GitHub-like wiki with real-time peer review
       (Flickr – Mathieu Struck)
    5. How might we create new processes for reinforcing the actions of academics to foster the rapid development of new methods and theories? (Flickr – Mathieu Struck)
    6. What questions other than “does it work” should we ask? What are our tools NOT good at? (Flickr – Clyde Bentley)
    7. What are the most important new tools for us to become proficient at using to answer our new questions? (Flickr – Cowboy Ben Alman)
    8. How can we convince our peers that these tools are more appropriate than the tried and true tools?
    9. Provocative Questions. Using/developing new research methods to more rapidly advance the field: How might we create new methods for reinforcing the actions of academics to foster the rapid development of new methods and theories? Which new tools are best for which questions, and how do we enable our peers to recognize the value of alternative methods?