The Challenge of Evaluating Electronic Decision Support in the Community

Jim Warren
Department of Computer Science
University of Auckland
(P15, 17/10/08, Usability stream, 1.50pm)

  1. The Challenge of Evaluating Electronic Decision Support in the Community
     Jim Warren, Rekha Gaikwad, Thusitha Mabotuwana, Mehnaz Adnan, Tim Kenealy, Beryl Plimmer, Susan Wells, Paul Roseman and Karl Cole
     HINZ Conference & Exhibition, Rotorua, October 2008
  2. Why Study Electronic Decision Support?
     - Because decision support is needed (computers are good at computing!)
       - To promote evidence-based medicine, e.g., to align therapy to risk
       - To remind: as long as fewer than 50% of medical records indicate smoking status, we'll need electronic decision support
     - Huge opportunity in chronic disease management and in general practice medicine
     - BUT it's subtle to get it right
       - Don't want to burden a busy professional
       - Need to reward a little input with really worthwhile output
       - And not just remind the clinician of what they were going to do anyway!
  3. Secret to Success
     - Kawamoto et al. screened 10,000 articles and found four independent predictors of success:
       - automatic provision of decision support as part of clinician workflow
       - provision of recommendations rather than just assessments
       - provision of decision support at the time and location of decision making
       - computer-based decision support
     - Sounds good, but how do we know a particular product in a particular context is succeeding with these?
     Kawamoto K, Houlihan C, Balas E, Lobach D. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330(7494):765.
  4. Failure
     - PRODIGY implemented 120 GP decision support modules for the UK, integrating with all major PMS applications
     - Experimental evaluation of a PRODIGY implementation in 60 general practices in the northeast of England found no clinical effect
     - Qualitative follow-up found serious problems with:
       - timing of the guideline trigger
       - ease of use of the system
       - helpfulness of the content
     - In short, they just got it wrong
     Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, Purves I. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ 2002;325(7370):941-8.
     Rousseau N, McColl E, Newton J, Grimshaw J, Eccles M. Practice based, longitudinal, qualitative interview study of computerised evidence based guidelines in primary care. BMJ 2003;326(7384):314-21.
  5. AMIA Framework
     - The lack of uptake of CDSS has been recognised by the American Medical Informatics Association (AMIA)*
       - acknowledging the “limited extent” to which clinical decision support is being leveraged in the US
     - The roadmap identifies three ‘pillars’:
       - Best knowledge available when needed
       - High adoption and effective use
       - Continuous improvement of knowledge and clinical decision support methods
     - The first of these might be achieved with technology (AMIA recommends a service-oriented architecture), but the latter two require detailed and contextualised usability evaluation
     * A Roadmap for National Action on Clinical Decision Support, June 2006
  6. A Look at PREDICT CVD/Diabetes
     - Good fit to Kawamoto's four criteria
       - although data needs can exceed what's in the PMS
     - Enjoying some success
       - as at January 2008, over 100,000 risk assessments on over 50,000 patients in NZ
     - But is its usability optimal?
       - e.g., are the outputs understood, appreciated and used as intended?
     - Detailed cost/benefit understanding is needed
       - Just where does the time go when PREDICT is deployed in a consult?
       - e.g., time spent entering data, reading recommendations, explaining risks to a patient
     - We are engaged in a study (on a University internal grant) to explore optimisation of PREDICT (and to develop NIHI's evaluation infrastructure)
  7. Our Method of Investigation
     - Video
       - We needed to see (GP/patient and on-screen, with audio) what was going on
     - Opted for the use of medical actors
       - Bring GPs into the NIHI Health Technology Lab for recording
       - Role-play three PREDICT use cases:
         - a woman with surprisingly high risk
         - an Indian male
         - a Māori male
  8. Morea Usability Analysis Workbench
  9. Challenge: Rationalisation and Recollection
     - Questionnaires
       - Feasible (apart from response bias), but what do we really find out?
         - Something can bother a user a lot, yet not be high prevalence
         - If users don't like something, how does that reliably help you fix it?
     - Focus groups
       - More logistically difficult; use more time, but of far fewer people
       - Dialogue among clinicians can 'unpack' phenomena
     - Better if we can combine these with direct observation
       - Then a designer can see just what's going wrong
  10. Challenge: Consent, Recruitment and the Problem with Video
      - Videotape and general practice can be a little difficult to mix
      - Most decision support tools are used on only a proportion of patients
        - i.e., we only want to recruit, and to invoke the recording equipment, sporadically
  11. Challenge: Realistic Test Cases and Software Environment
      - It sounds easy enough to put a 'realistic' patient into a PMS
      - But when does their record begin?
        - Our scenario began with a sick certificate for flu the previous week (now the GP wanted to assess CVD risk)
        - But we needed to set up a complete history, including that visit a week ago
      - Time moves on!
        - 'A week ago' keeps moving
        - It is actually very hard to synthesise patients
        - and to keep them current
  12. Challenge: Accepting the Cost
      - GP time is remarkably expensive
        - Lost time is lost business, against many fixed operating expenses
        - The fully loaded rate is high enough to be opposed by funding bodies and ethics boards
      - Researcher time is much cheaper, but still adds up
        - Analysing video takes a LOT of time (around 10 hours per hour of video); the rough worked example below shows how quickly this accumulates
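As a rough illustration of how the analysis burden accumulates (the consult count and duration below are assumptions invented for this sketch; only the 10:1 analysis ratio comes from the slide, and none of it is data from the study):

```typescript
// Rough cost sketch for a small video-based evaluation.
// Consult count and duration are illustrative assumptions; the
// 10 hours of analysis per hour of video comes from the slide above.
const consults = 12;           // number of recorded consultations (assumed)
const minutesPerConsult = 20;  // average recording length in minutes (assumed)
const analysisRatio = 10;      // hours of analysis per hour of video

const videoHours = (consults * minutesPerConsult) / 60;
const analysisHours = videoHours * analysisRatio;

console.log(`Video recorded: ${videoHours.toFixed(1)} h`);
console.log(`Researcher analysis: ${analysisHours.toFixed(0)} h`);
// 12 consults x 20 min = 4 h of video, i.e. roughly 40 h of analysis,
// before any GP time at a fully loaded rate is counted at all.
```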
  13. Opportunity: Exit Interviews and Focus Groups
      - Catch the clinician (e.g., GP or nurse) upon completing use of the system
        - Can get details while they are fresh and associated with a particular occasion
        - Can then take those experiences into focus groups for discussion
      - Heavy on researcher (interviewer) time if the decision support usage prevalence is low
  14. Opportunity: 'Discount' Usability Testing
      - Can catch a lot of usability problems by reviewing carefully, even with a non-novice user
        - Cognitive walkthrough
          - Step carefully through each required user action and ask whether a 'normal' user would see how to do it, and would get appropriate feedback
        - Protocol analysis
          - A person acting as the user 'talks aloud' as they use the system, to expose what they are thinking
  15. Opportunity: Automated Logging from the Vendor/Host
      - Vendors can add logging of user actions
        - could (possibly) impact performance
        - logs can get large
        - requires programming into the application
      - Web-based applications make this more difficult
        - It is natural for the web server host to receive only the results of each screen
          - It can't tell whether five minutes between screens meant five minutes of more-or-less continuous data entry, or one minute of data entry and four minutes of a great discussion with the patient
      - (A sketch of the finer-grained client-side logging this implies is given below)
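A minimal sketch of the kind of finer-grained, client-side logging this implies for a web-based CDSS. The field handling, endpoint name and batching interval are all assumptions for illustration, not part of PREDICT or any vendor API:

```typescript
// Illustrative client-side interaction logger (sketch only).
// Timestamps focus/blur/change events on form fields and posts them in
// batches, so idle time within a screen (e.g. discussion with the
// patient) can be separated from continuous data entry.
type UiEvent = { field: string; kind: string; t: number };

const buffer: UiEvent[] = [];

function record(kind: string, target: EventTarget | null): void {
  const el = target as HTMLInputElement | null;
  if (!el || !el.name) return;               // only log named form fields
  buffer.push({ field: el.name, kind, t: Date.now() });
}

['focusin', 'focusout', 'change'].forEach((kind) =>
  document.addEventListener(kind, (e) => record(kind, e.target), true)
);

// Flush the buffer every 15 seconds to a hypothetical logging endpoint.
setInterval(() => {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length);
  void fetch('/usage-log', {                 // endpoint name is an assumption
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(batch),
  });
}, 15_000);
```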
  16. Opportunity: CDSS-Prompted Enrolment and Video Recording
      - The system could help out with recruitment
        - Automatically recognise a candidate patient and prompt the clinician to enrol them in the trial (a rough sketch of such a check is given below)
        - Could also implement sampling strategies and spread the burden of obtaining consents across a wide user base
        - Does require cameras on all PCs, but this is becoming more common for Skype and Webex
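A minimal sketch of a CDSS-prompted enrolment check. The eligibility criteria, sampling rate and data shape are hypothetical placeholders, not PREDICT's actual rules:

```typescript
// Illustrative recruitment prompt: when the CDSS is opened for an
// eligible patient, occasionally ask the clinician to offer enrolment.
// Criteria and sampling rate are placeholder assumptions.
interface Patient {
  ageYears: number;
  priorCvdEvent: boolean;
  alreadyEnrolled: boolean;
}

const SAMPLING_RATE = 0.2; // prompt on roughly one in five eligible patients

function isEligible(p: Patient): boolean {
  return p.ageYears >= 35 && p.ageYears <= 74 && !p.priorCvdEvent && !p.alreadyEnrolled;
}

function shouldPromptForConsent(p: Patient): boolean {
  // Sampling spreads the consent/recording burden across many consults
  return isEligible(p) && Math.random() < SAMPLING_RATE;
}

// Example: call this where the risk-assessment screen is opened
const patient: Patient = { ageYears: 52, priorCvdEvent: false, alreadyEnrolled: false };
if (shouldPromptForConsent(patient)) {
  console.log('Prompt clinician: invite the patient to join the evaluation and start recording.');
}
```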
  17. Opportunity: Exploiting Electronic Data Interchange Standards
      - GP-to-GP transfer standards would allow new patient cases to be entered into a PMS
      - We could use these to enter synthetic cases without the need for interactive data entry
        - Much easier to keep cases up to date if we can synthesise an HL7 CDA or openEHR data file! (see the sketch below)
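A rough sketch of synthesising a transferable case file with dates computed relative to 'today', so the 'flu visit a week ago' never goes stale. The XML below is a deliberately simplified, CDA-like placeholder, not a conformant HL7 CDA or openEHR document:

```typescript
// Illustrative generator for a simplified, CDA-like synthetic record.
// Element names are placeholders; the point is that relative dates are
// recomputed every time the file is generated, keeping the case current.
function yyyymmdd(d: Date): string {
  return d.toISOString().slice(0, 10).replace(/-/g, '');
}

function daysAgo(n: number): Date {
  const d = new Date();
  d.setDate(d.getDate() - n);
  return d;
}

function syntheticCase(name: string, birthDate: string): string {
  const fluVisit = yyyymmdd(daysAgo(7)); // always "one week ago"
  const today = yyyymmdd(new Date());
  return `<ClinicalDocument>
  <recordTarget><patient><name>${name}</name><birthTime value="${birthDate}"/></patient></recordTarget>
  <encounter effectiveTime="${fluVisit}" reason="influenza - sick certificate issued"/>
  <encounter effectiveTime="${today}" reason="CVD risk assessment"/>
</ClinicalDocument>`;
}

console.log(syntheticCase('Test Patient', '19620115'));
```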
  18. Discussion
      - Engineers need details about system use in order to improve decision support system usability and effectiveness
      - Questionnaires and measurement of holistic outcomes are insufficient to provide reliable guidance for doing better
        - We really need video
      - Exit interviews, automated logging and system-prompted recruitment could provide a high-quality field evaluation
      - GP-to-GP transfer technology may aid laboratory evaluation
  19. Conclusion
      - We need to accept the cost of electronic decision support evaluation
      - We need to accept the need for fine-grained evaluation to inform improvement
      - We need better tools (logging and recruitment support within applications)
      - We need clinician engagement (with the time, cost and workflow implications thereof)
  20. Questions?
      - Contact Jim Warren (jim@cs.auckland.ac.nz)
