AI in Mental Health and Wellbeing - Current Applications and Trends

This is a presentation I gave at TransTech 2018 in the Bay Area, drawing from our research on mental health AI startups (https://emerj.com/ai-sector-overviews/diagnosing-and-treating-depression-with-ai-ml/).

The full video of this presentation is available online:
https://www.youtube.com/watch?v=CvrqoPpYF94

1. AI in Mental Health and Wellbeing - Current Applications and Trends
   Daniel Faggella, CEO at Emerj
2. Presentation Outline
   ● Background in brief
   ● The unmet needs of mental health
   ● Category 1: Conversational Interfaces
     ○ Purported benefits / limitations
   ● Category 2: Behavioral Pattern Recognition
     ○ Purported benefits / limitations
   ● Believable benefits and predictions
   ● Risks to consider
   ● Concluding takeaways on this sector
3. Emerj (formerly TechEmergence)
   ● Emerj is a market research firm focused on the implications of AI for business and government leaders
   ● Organizations planning very expensive technology initiatives use us to support critical decisions
   ● We work with clients as large as the World Bank, and with startups that have raised only $10MM
4. Emerj (formerly TechEmergence)
   ● We examine AI across three dimensions:
     ○ Applications (Possibilities)
     ○ Implications (Probabilities)
     ○ Strategy (Plans)
   ● Diagnostics, health, and mental health have been a focus for us for over two years, and this presentation summarizes our findings at the intersection of AI and mental health
5. Note
   ● This presentation is remarkably short (20 minutes), so I will not be able to talk about the full breadth of AI solutions
   ● My goal is to highlight the critical points
   ● If you're interested in our deeper research into individual mental health solutions, please see the last slide in this presentation, which lists our deeper reports and app comparisons
6. Unmet Needs of Mental Health
7. By the Numbers
   ● As of FY 2015, less than 10% of overall NIH funding ($30 billion) went to mental health and addiction-related programs
   ● Mental illness accounts for more than 10% of lost years of healthy life and over 30% of all years lived with disability (WHO, 2001)
   ● Depression is estimated to cause 200 million lost workdays each year, at a cost to US employers of $17 to $44 billion
   ● Mental illness is also hard to measure and quantify (unlike, say, pneumonia, where we can be sure when you have it and when you don't)
8. Category 1: Conversational Interfaces
9. Vendors
   ● Vendors: Wysa, Ginger.io, Woebot, etc.
   ● Purported benefits:
     ○ Leading indicators for therapists/doctors
     ○ Healthy tips / reminders via chat (with machines and humans)
     ○ Connection to a real human coach/therapist
   ● Limitations:
     ○ Chatbots are still somewhat nascent
     ○ Chat is only one source of data
10. Category 2: Behavioral Pattern Recognition
11. Vendors
   ● Vendors: Ginger.io, Sunrise Health (now "Marigold"), Mindstrong, HelloJoy
   ● Purported benefits:
     ○ Leading indicators for therapists/doctors
     ○ On-demand connection to professionals
     ○ Extracting data from multiple sources (text content in and out of the app, location, etc.)
   ● Limitations:
     ○ Measuring wellbeing is hard
12. Vendors
   ● Data sources purportedly tracked (see the sketch below):
     ○ Exercise / sleep
     ○ Location / movement
     ○ Self-report assessments (daily, weekly)
     ○ Contents of text messages
     ○ Phone use / activity
   ● There are questions around both the reliability of self-reported data (e.g. exercise) and the conclusiveness of phone-use data
   ● There are privacy concerns with this sensitive personal data
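
To make the tracked data streams concrete, below is a minimal sketch of what a single day's record might look like for one user, assuming a simple per-day aggregation. The DailyRecord class and every field name in it are hypothetical illustrations, not any vendor's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical daily record combining the data streams listed above.
# All field names are illustrative only -- not a real vendor schema.
@dataclass
class DailyRecord:
    user_id: str
    date: str                       # e.g. "2018-11-02"
    sleep_hours: Optional[float]    # estimated from phone inactivity, not ground truth
    steps: Optional[int]            # exercise proxy from the phone's motion sensors
    places_visited: Optional[int]   # coarse location / movement summary
    texts_sent: Optional[int]       # message volume (content analysis omitted here)
    screen_unlocks: Optional[int]   # phone use / activity
    phq9_score: Optional[int]       # self-report assessment (0-27), if taken that day

# Missing values are the norm: self-reports are sporadic and sensors fail,
# which is part of why measuring wellbeing from this data is hard.
record = DailyRecord(
    user_id="u123", date="2018-11-02",
    sleep_hours=6.5, steps=3400, places_visited=2,
    texts_sent=12, screen_unlocks=84, phq9_score=None,
)
```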
13. Believable Benefits and Predictions
14. ● We suspect that the core value proposition of the mental health space lies in finding patterns across new data streams
     ○ Sources: phone activity, location, app use, content of texts, estimated hours of sleep, etc.
       ■ Correlating new streams of real-time data (not self-reported, but extracted from the world) with proxies for user wellbeing (psychiatrist visits, self-reported wellbeing, frequency of suicidal thoughts, etc.) seems promising, and AI can be part of this mix (see the toy example below)
   ● We imagine a future where this proxy data can give a much more accurate estimate of a patient's wellbeing
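
As a toy illustration of the pattern-finding described on the slide above, the sketch below correlates a few passive signals with a self-reported wellbeing proxy. All values and column names are invented for the example; a real system would need rigorous missing-data handling and clinical validation, not raw correlations.

```python
import pandas as pd

# Toy dataset: daily passive signals plus a self-reported wellbeing proxy.
# Every value here is invented purely for illustration.
df = pd.DataFrame({
    "sleep_hours":     [7.5, 6.0, 4.5, 8.0, 5.0, 7.0, 3.5],
    "screen_unlocks":  [40, 75, 120, 35, 95, 50, 140],
    "places_visited":  [3, 2, 1, 4, 1, 3, 0],
    "wellbeing_score": [8, 6, 4, 9, 5, 7, 3],  # e.g. a daily 0-10 self-report
})

# Pearson correlation of each passive stream with the wellbeing proxy.
# Correlation is only a first step: it says nothing about causation, and any
# real deployment would validate against clinical outcomes over time.
print(df.corr()["wellbeing_score"].drop("wellbeing_score"))
```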
15. ● We suspect that it may be years before AI can realistically suggest actions or behavior change for the human
     ○ These suggestions require much more context than a chatbot or iPhone can capture (from childhood trauma, to fears, to other medical issues), and should probably stay in the purview of doctors
16. ● Replicating human interaction is challenging, and seems potentially dangerous
     ○ Text chat is a limited channel, and conversational AI is exceedingly limited
       ■ There are a number of examples online of very disjointed or nonsensical conversations with these bots
   ● Text is one potential source of feedback and data from the user, and it could be a critical one for coaxing out high-level risks, such as:
     ○ Asking the patient if they're okay (if other data signals indicate that they might be at risk) - a minimal version is sketched below
   ● We feel less optimistic about chatbots encouraging behavior change, or about humans being held accountable to chatbot suggestions
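
Here is a minimal sketch of that narrow chatbot role: trigger a human check-in only when passive signals look concerning, and escalate to a clinician rather than diagnose or advise. The signal names and thresholds are invented for illustration.

```python
# Illustrative rule-based trigger: the bot only asks "are you okay?" and
# flags the care team -- it does not diagnose or suggest behavior change.
# Signal names and thresholds are invented for this sketch.
RISK_THRESHOLD = 2

def count_risk_signals(day: dict) -> int:
    """Count crude risk indicators in one day's passive data."""
    signals = 0
    if day.get("sleep_hours", 8.0) < 4.0:   # unusually little sleep
        signals += 1
    if day.get("places_visited", 1) == 0:   # no movement outside the home
        signals += 1
    if day.get("texts_sent", 1) == 0:       # social-withdrawal proxy
        signals += 1
    return signals

def respond(day: dict) -> str:
    if count_risk_signals(day) >= RISK_THRESHOLD:
        # Hand off to humans: a check-in message plus a clinician alert.
        return "send_check_in_and_notify_clinician"
    return "no_action"

print(respond({"sleep_hours": 3.0, "places_visited": 0, "texts_sent": 5}))
# -> "send_check_in_and_notify_clinician"
```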
17. ● Conversational interfaces might be able to reach:
     ○ People unable to afford therapy
     ○ People who might be too shy or ashamed to try therapy
     ○ People in rural or remote areas with no therapist access
   ● While there is an opportunity to help those who cannot or choose not to use traditional therapy, it is unclear what the best mental health intervention is for this group of people
   ● More research is clearly needed to discern the impact of different kinds of digital treatment (or of no treatment)
18. ● Good news: startups in this sector seem to understand that the purpose of their applications is to keep patients safe, and to hand them off to doctors
   ● The value proposition of Mindstrong (pictured on the slide) is representative of many other firms, with "preemptive" insight being emphasized
19. Risks to Consider
20. Business Model
   ● Business models create incentives:
     ○ Monthly fee:
       ■ Encourages forever use, whether or not the user needs it
     ○ Pay per referral to a therapist:
       ■ Refers people to therapists, whether or not the user needs it
     ○ Pay per chat conversation with a therapist or "coach":
       ■ Hires inexperienced / inexpensive "coaches" onto the platform
       ■ Refers people to chat therapists, whether or not it's actually necessary or preferable
21. Business Model
   ● Startups have to find a way to be profitable, but it's unclear whether for-profit incentives might bend the products in a direction that doesn't serve users
   ● This isn't necessarily the fault of startups, but it is a problem that will have to be worked out, and something that users should be aware of
22. Long-Term Effects
   ● More scientific testing and trials are required here
     ○ We need long-term double-blind studies to assess the actual value (and not research conducted by the founders themselves)
   ● Testing would need to be done for each of the target groups and target conditions (a stratified-assignment sketch follows this slide):
     ○ People in rural / remote places
     ○ Men / women
     ○ Age of patient
     ○ Depression / bipolar / anxiety / other
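
To illustrate what testing across target groups implies in practice, here is a minimal stratified-randomization sketch: within each subgroup, participants are split evenly between an app-based treatment arm and a control arm so that subgroup effects can be compared. The stratum labels and assignment scheme are illustrative, not a full trial design.

```python
import random

def stratified_assign(participants, strata_key, seed=0):
    """Split each stratum roughly 50/50 between treatment and control."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    strata, assignments = {}, {}
    for p in participants:
        strata.setdefault(p[strata_key], []).append(p["id"])
    for ids in strata.values():
        rng.shuffle(ids)
        half = len(ids) // 2
        for pid in ids[:half]:
            assignments[pid] = "treatment"
        for pid in ids[half:]:
            assignments[pid] = "control"
    return assignments

# Hypothetical strata crossing location and condition, as the slide suggests.
people = [
    {"id": 1, "stratum": "rural_depression"},
    {"id": 2, "stratum": "rural_depression"},
    {"id": 3, "stratum": "urban_anxiety"},
    {"id": 4, "stratum": "urban_anxiety"},
]
print(stratified_assign(people, "stratum"))
```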
23. Conclusion and Takeaways
24. ● AI in mental health and wellness is likely to unearth solid predictors of wellbeing and mental health risk - but we have no robust proof at present
   ● It will take much longer to reach a level where AI can suggest actions and behaviors that tangibly improve wellbeing or reduce risk
     ○ Even in 2-3 years, we don't expect these "suggestions" to get beyond very generic and basic advice (e.g. "get 8 hours of sleep," "stay hydrated," or "do your breathing exercise")
25. That's All, Folks
   The end. I've included a list of some of the resources from this presentation on my final "References" slide.
   Feel free to send an email with questions, or for a copy of the slides: dan@emerj.com
   Twitter: @danfaggella
   emerj.com
26. Resource List
   Outside resources:
   ● https://www.thenationalcouncil.org/topics/federal-budget/
   ● http://www.who.int/mental_health/economic_aspects_of_mental_health.pdf
   ● https://www.wsj.com/articles/meet-the-chatbots-providing-mental-healthcare-1533828373
   ● https://www.himssinsights.eu/mental-health-chatbots-future-therapy
   ● https://woebot.io/blog/why-we-need-mental-health-chatbots/
   Emerj resources:
   ● https://www.techemergence.com/chatbots-mental-health-therapy-comparing-5-current-apps-use-cases/
   ● https://www.techemergence.com/diagnosing-and-treating-depression-with-ai-ml/
   ● https://www.techemergence.com/ai-for-mobile-medical-diagnostics-current-applications/
