
Conducting a Summative Study of EHR Usability: Case Study



At last year’s conference, a group of us explored the complexity involved in evaluating the usability of Electronic Health Records: the wide range of user profiles and characteristics, a seemingly infinite number of tasks, and the challenges of obtaining realistic data while respecting HIPAA regulations. In December, the Usability team at athenahealth conducted a summative usability study of [product]. In this case study, Kris will discuss how the team navigated the challenges of summative EHR evaluation to conduct this study. Topics include task selection, recruiting, metric selection, logistics, and lessons learned.

Published in: Design, Business, Technology


  1. Conducting a Summative Study of EHR Usability: Case Study from athenahealth. Kris Engdahl, May 7, 2012
  2. How do we define EHR usability?
     What is an electronic health record (EHR)?
     • Electronic version of paper charting, plus capacity for electronic data exchange; managed by healthcare professionals
     • Used by single-physician practices, large healthcare networks, and everything in between
     • Different from a personal health record, which is managed by the patient
     What is usability? (Choose your definition; here is the one NIST uses)
     • ISO 9241-11: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
  3. Why is EHR Usability such a hot topic?
     Healthcare is a large industry
     • $2.5 trillion, 14.3 million jobs, 17.3% of GDP
     It affects all of us
     • We are all once and future patients
     • We pay for everyone’s healthcare, either through insurance or taxes
     Recent healthcare reform legislation encourages the adoption of EHRs
     • The Health Information Technology for Economic and Clinical Health (HITECH) Act includes incentives for “meaningful use” of EHRs
     • The Office of the National Coordinator (ONC) is paying attention to the field of usability as it evaluates EHRs for certification
     Poor EHR usability can put patients at risk
     • Increasing attention is being paid to patient safety in regard to EHR usability
     Healthcare providers are demanding better EHR usability
  4. Challenges in EHR Usability Testing
     Who do we test with?
     • Who are “representative” users?
     • How do we get access to them?
     What tasks do we test?
     • Different kinds of users have widely varying tasks
     What product “version” do we test?
     • Most EHRs are highly customized for individual practices
     • How useful is a test of a “generic” version?
     What data do we use for testing?
     • Participants are always distracted by unrealistic data, so it has to be realistic
     • Real data is protected by HIPAA
     It is challenging, BUT…
  5. Meeting the challenge
     Know why you’re testing
     • Why do a summative test?
     • What will you do with the data?
     Manage the scope
     • How much time do you have?
     • What kinds of resources do you have?
     • How much can you reasonably do?
     Prepare thoroughly
     • Consider all the pieces
     • Plan for logistics and timing
     • Prepare the testing team
     Do it!
  6. Scope: You could do this forever
     Testing an EHR could mean testing everything all these people do
     And then there are the other dimensions that make more user groups
     • Age, specialty, tech-savviness, domain experience, etc.
  7. Knowing why helped us scope our test
     Why we conducted a summative study
     • Planning was underway for 2012, with UX changes in it
     • We wanted to be able to quantify the improvements we were embarking on
     • This would be the “before” measurement
     This determined the tasks and users we selected
     • We focused on areas that we intended to re-examine in upcoming releases
       – We had completed a heuristic review and a patient safety review
       – We had identified areas for UX work in 2012
     • We focused on clinicians’ tasks in an ambulatory setting
     • We focused on tasks that are common to a number of different specialties
     • We presented the list, with time limits, to key stakeholders for prioritization
  8. Here’s a rough view of an internist’s work in an office visit. Source:
  9. Scoping the recruit
     Challenge: Recruit “representative” users
     • Clinicians vary by specialty, practice size, age, experience, gender, EHR knowledge
     • Just how many do you need to recruit, anyway?
     How we narrowed the list
     • We screened for clinicians: MD, DO, NP, PA
     • We asked for a mix of specialty, gender, years of experience, and practice size
     • We asked for people who used EHRs but had not used ours
     • We screened for people who commonly did the tasks we were testing
     Recruiting itself
     • We hired a professional recruiting firm
     • We paid participants through the recruiter
     • We recruited 22 people, hoping to get 20 participants
  10. Determining the environment to test
     Challenge: Most EHRs are highly customized by clients
     • Installed EHRs have lots of custom programming
     • Our cloud-based EHR is highly configurable
     • And it changes every month
     Challenge: Data has to be realistic, but not real
     • Participants will be distracted by incomplete or incorrect data
     • It is wrong (and illegal) to use actual medical data
     What we did
     • We modeled the environment we tested on an actual client environment
     • We chose a practice that had very little configuration (as “vanilla” as possible)
     • We scrambled the data from the practice so all records were deidentified
     • We arranged to be able to copy our set-up test environment for each participant
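The deck doesn’t show how the records were scrambled. One common deidentification approach is to shuffle each identifying field’s values independently across records, so no real chart survives intact. A minimal sketch of that idea; the `scramble` helper and sample charts are hypothetical, not athenahealth’s actual tooling:

```python
import random

def scramble(records, fields):
    """De-identify by shuffling each field's values independently
    across all records, breaking the link between a patient's
    name, DOB, and chart contents. (Hypothetical helper; the deck
    does not describe the team's actual method.)"""
    scrambled = [dict(r) for r in records]
    for field in fields:
        values = [r[field] for r in scrambled]
        random.shuffle(values)
        for r, v in zip(scrambled, values):
            r[field] = v
    return scrambled

charts = [
    {"name": "Ann Lee", "dob": "1961-04-02", "dx": "hypertension"},
    {"name": "Bo Chen", "dob": "1975-11-19", "dx": "type 2 diabetes"},
    {"name": "Cy Diaz", "dob": "1988-07-30", "dx": "asthma"},
]
fake_charts = scramble(charts, ["name", "dob"])
```

The clinical content stays plausible (real diagnoses, real visit patterns) while the identifying fields no longer belong to any one person.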
  11. Managing participant training
     Challenge: EHRs are not walk-up-and-use applications
     • Most implementations include 2-5 days of training
     Challenge: We had 5 moderators
     • With varying expertise with the application
     • Any 5 people will say things differently
     What we did
     • We worked with Training professionals to develop a training script
       – ~8 minutes long
       – Covered key concepts / areas of the interface
       – Walked through the tasks that we tested
     • Each moderator followed the script, so each participant had the same training
     • We printed out the screen shots from the training walkthrough as “Help”
  12. Challenge: What do you measure?
     Note: This data is for illustration only
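The slide’s chart is omitted from this transcript, but a typical summative readout covers effectiveness (task success rate) and efficiency (time on task) with confidence intervals. A sketch with made-up numbers, echoing the slide’s “for illustration only” caveat:

```python
import math
from statistics import mean, stdev

# Illustrative results for one task across ten participants
# (invented numbers, not the study's actual data).
success = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 1 = completed unassisted
times_s = [92, 110, 185, 74, 98, 131, 210, 88, 105, 79]  # seconds on task

success_rate = mean(success)               # effectiveness
mean_time = mean(times_s)                  # efficiency
# 95% confidence interval half-width for mean time (normal approximation)
half_width = 1.96 * stdev(times_s) / math.sqrt(len(times_s))

print(f"success rate: {success_rate:.0%}")
print(f"time on task: {mean_time:.0f}s ± {half_width:.0f}s")
```

Reporting the interval alongside the mean matters for a “before” baseline: a later “after” measurement is only a credible improvement if it clears the interval, not just the point estimate.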
  13. Start with a reasonable scope
     Know how your data will be used
     • What are you comparing?
     • Who will use the data, and how?
     Prioritize tasks and user groups
     • Most common tasks
     • Critical tasks
     • Tasks that carry patient safety risks
     • “Disparity-oriented use cases” (NISTIR 7769)
     • NIST has some user scenarios in NISTIR 7804
     Determine a reasonable sample size
     • How much money do you have?
     • How much time do you have?
     • (See “who will use the data, and how” above)
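For the sample-size question, one rough planning aid (my addition, not something the deck prescribes) is to look at the margin of error a given n buys you on an observed success rate, using the Wald normal approximation:

```python
import math

def moe(p_hat, n, z=1.96):
    """Approximate 95% margin of error for an observed task
    success rate p_hat with n participants (Wald interval --
    a rough planning aid only)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# With 20 participants, an observed 80% success rate carries
# roughly a +/-18-point margin: fine for a coarse before/after
# baseline, far too coarse to distinguish 80% from 85%.
for n in (5, 10, 20, 50):
    print(f"n={n:2d}: 80% ± {moe(0.8, n):.0%}")
```

Working backward from the precision your stakeholders actually need is usually a better argument for a budget than any rule of thumb.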
  14. Determine how realistic you can get
     Balance customization with comparability
     • Least configuration / customization?
     • “Typical” configuration / customization?
     Be sure to get realistic (but not real) data
     • Creating realistic data from scratch will be time- and knowledge-intensive
     • Patients’ real data is covered by HIPAA
     • It would be nice if NIST had importable patient charts for their scenarios
     Decide how to handle training
     • How much training do users normally get with what you’re testing?
     • How much time can you get with participants?
     • Can you develop customized training for the tasks you will test?
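If you do build realistic data from scratch, seeded synthetic generation keeps it repeatable and HIPAA-free: every participant can start from an identical, entirely fictional chart set. A hypothetical sketch (the name and problem lists are invented; the deck’s team instead deidentified a real client’s data):

```python
import random

FIRST = ["Maria", "James", "Aisha", "Wei", "Elena", "Tom"]
LAST = ["Garcia", "Okafor", "Nguyen", "Smith", "Kim", "Rossi"]
PROBLEMS = ["hypertension", "hyperlipidemia", "type 2 diabetes",
            "asthma", "osteoarthritis", "GERD"]

def synth_chart(rng):
    """Generate one plausible-looking, entirely fictional patient
    chart. Illustrative only; real charts need far more depth
    (meds, allergies, histories) to avoid distracting clinicians."""
    return {
        "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
        "age": rng.randint(18, 90),
        "problems": rng.sample(PROBLEMS, rng.randint(1, 3)),
    }

rng = random.Random(42)            # seeded: same charts every run
charts = [synth_chart(rng) for _ in range(3)]
```

The seed is the point: resetting the generator between sessions gives every participant the same environment, mirroring the deck’s practice of copying a set-up test environment per participant.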
  15. Plan for a specialized recruit
     Prepare the screener(s)
     • NISTIR 7804 has a sample screener
     • You may need more questions, depending on tasks and users
     Allow time for recruiting
     • The more specific your screener, the more time you need
     • Book your professional recruiter in advance
     Don’t skimp
     • On recruiting
     • On incentives
     Be flexible to accommodate participants
     • Medical people are wicked busy
     • Be prepared to test early in the morning and in the evening
  16. Prepare your moderators
     Get any equipment you need ahead of time
     • Technical problems tend to happen when you least expect them
     • Plan to have backup plans for everything
     Schedule time to train moderators
     • Product: effective paths, ineffective paths (and ways back), accelerators
     • Test script: starting points, time limits, what to look for
     Plan for multiple pilot sessions
     • Ideally at least one for each moderator, with everyone watching
     • Discussions ahead of time about success and errors
       – Identify likely errors
       – Get consensus on definitions of success
     • Discussions on how to handle possible “situations”
  17. Use good test hygiene
     Schedule sessions reasonably
     • Time between sessions, for rest and reset
     • Do not overwork moderators
     • Make sure moderators eat and sleep
     Normalize observations and analysis
     • Encourage multiple observers
     • Document decisions about success and errors
     • Do a consistency pass of all recordings
     • Double-check data storage, analysis, and statistics
  18. References
     • General info on the healthcare industry
     • Forecasts on health care spending
     • What’s an EHR?
     • Articles on EHR usability
     • Health Insurance Portability and Accountability Act of 1996 (HIPAA) privacy
  19. References
     Government documents on EHR usability
     • NIST site on the usability of Healthcare IT
       – NISTIR 7769: Human Factors Guidance to Prevent Healthcare Disparities with the Adoption of EHRs
       – NISTIR 7741: NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records
       – NISTIR 7742: Customized CIF Format Template for Electronic Health Record Testing
       – NISTIR 7743: Usability in Health IT: Technical Strategy, Research, and Implementation
       – NISTIR 7804: Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records
     • AHRQ articles
     Information on EHR usability and patient safety