Got Data, Now What? Analyzing Usability Study Results.



Presented to the LAMA/MAES Using Measurement Data for Library Planning and Assessment Committee at the ALA 2005 Annual Conference, June 26, 2005, Chicago, Illinois.

  • Many of these complaints could be remedied through better web design.  To help us understand what factors impede user success in finding information, we all read Jared Spool’s Web Site Usability: A Designer's Guide. North Andover, MA: User Interface Engineering, 1997.
     Links must be consistent and predictable. Many people do not experiment and are uncomfortable guessing. The better a user can predict where a link will go, the more successful she/he will be.
     Consistency also means grouping like things on the same page and, conversely, not mixing unlike things on a page.
     Be consistent with language: if you call something one word on one page, use the same terminology on succeeding pages.
     Novice users generally do not scroll down a page, so the most important information should be on the first screen, or there should be an indication that the page continues “below the fold” (to use newspaper jargon that has been adopted by the web).
     People scan web pages looking for keywords, rather than reading the text carefully.
     Users who are task oriented, i.e., looking for information, do not like distractions such as animations or sound. (Don’t use animation or sounds.)
     Make links look like links; it is not a good idea to embed links within text or to hide links in graphics.
     Distinguish text from graphics
     Avoid jargon
  • Jakob Nielsen
    These are ten general principles for user interface design. They are called "heuristics" because they are broad rules of thumb that encourage investigation, problem solving, and experimentation, rather than specific usability guidelines.
    1. Visibility of system status
    The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
    2. Match between system and the real world
    The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
    3. User control and freedom
    Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
    4. Consistency and standards
    Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
    5. Error prevention
    Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
    6. Recognition rather than recall
    Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
    7. Flexibility and efficiency of use
    Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
    8. Aesthetic and minimalist design
    Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
    9. Help users recognize, diagnose, and recover from errors
    Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
    10. Help and documentation
    Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
  • Incorporate real users. Web site testing involves users who are representative of the targeted audience. By engaging real users, developers can understand the specific needs of users. (Norlin, 2002, p.7)
  • Observe and record meticulously. The purpose of the test is to observe the participants’ ability to perform the given tasks; therefore, record comments or questions about the Web site as well as users’ behaviors. This observation and recording distinguishes usability testing from focus groups, surveys, or beta testing. (Norlin, 2002, p.7)
  • Shneiderman and Plaisant’s ground-breaking work in human-computer interaction (2004) identifies five factors for benchmarking the usability of an interface.
    1. Time to learn - how long it takes a new user to learn to carry out a set of tasks
    2. Speed of performance - measured by carefully timing a set of tasks provided to the tester
    3. Rate of errors - users are carefully watched to determine how many, and what kinds of, errors they make while performing tasks
    4. Retention over time - judged by the tester’s ability to complete similar tasks throughout the course of testing
    5. User’s subjective satisfaction - gauged by both the spoken comments during testing as well as a follow-up interview and questionnaire
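The measurable factors above (speed of performance, rate of errors) can be computed from simple session logs. A minimal sketch in Python; the participants, tasks, timings, and log format are invented for illustration, not taken from the study:

```python
# Sketch: computing Shneiderman-style usability benchmarks from
# hypothetical session logs. All data and field names are invented.

sessions = [
    # (participant, task, seconds_to_complete, errors, completed)
    ("P1", "find_country", 95, 2, True),
    ("P1", "select_data", 60, 0, True),
    ("P2", "find_country", 140, 4, True),
    ("P2", "select_data", 80, 1, False),
]

# Speed of performance: mean completion time over successfully finished tasks
done = [s for s in sessions if s[4]]
mean_time = sum(s[2] for s in done) / len(done)

# Rate of errors: total errors per attempted task
error_rate = sum(s[3] for s in sessions) / len(sessions)

print(f"Mean time on completed tasks: {mean_time:.1f} s")
print(f"Errors per task attempt: {error_rate:.2f}")
```

Retention over time and subjective satisfaction would come from repeated task rounds and questionnaire scores rather than from a single log like this.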
  • Reliability:
    According to Hernon (1994, p. 2), reliability is “the extent to which the same results are produced on repeated samples of the same population.”
    Hernon, P. 1994. Statistics: A Component of the Research Process. Norwood, N.J.: Ablex.
  • Validity (Powell and Connaway, 2004, p. 44):
    Construct validity – the extent to which an instrument measures what it is designed to measure
    Internal validity – the extent to which a study accurately identifies causal relationships
    External validity – generalizability; repeating the study under a variety of circumstances or conditions (e.g., other times, people, places) to see if the results will be repeated in another setting. Similar to reliability.
  • [Slide images: OCLC WorldMap screen captures – blank map, countries selected, data selected, selected data]

    1. Got Data, Now What? Analyzing Usability Study Results Lynn Silipigni Connaway June 26, 2005 Presented at the ALA 2005 Annual Conference, Chicago, IL LAMA/MAES Using Measurement Data for Library Planning and Assessment Committee
    2. Usability Testing: Why? “Probably the best reason to test for usability is to eliminate those interminable arguments about the right way to do something. With human-factors input and testing, however, you can replace opinion with data. Real data tend to make arguments evaporate and meeting schedules shrink.” (Fowler, 1998, Appendix, p. 283)
    3. Usability Testing: Definition  Degree to which a user can successfully learn and use a product to achieve a goal  Research methodology • Evaluation • Experimental design  Observation and analysis of user behavior while users use a product or product prototype to achieve a goal (Dumas and Redish, 1993, p.22)  “User-centered design” process involving the user from initial design to product upgrade (Norlin and Winters, 2002)  Approach is to be a servant to the users of a system, NOT subservient to the technology (Gluck, 1998)  Goal is to identify usability problems and make recommendations for fixing and improving the design (Rubin, 1994)
    4. Usability Testing: Background  Relatively new methodology (Norlin and Winters, 2002) • Origins in aircraft design • Traced back to marketing • Development of a product • Popular in 1980s with widespread access to computers • Initiation of human-computer interface usability studies • Evolved from human ethnographic observation, ergonomics, and cognitive psychology • Qualitative and quantitative data
    5. Usability Testing: Purpose  Evaluation tool  Identify problem areas  “Determine the fit of the design to the intended users” (Norlin and Winters, 2002, p. 5)
    6. Usability Testing: Suitable Questions  What is the best layout for a web page?  How can you optimize reading from PDAs and small-screen interfaces?  Which online fonts are the best?  What makes an e-commerce site difficult to use?  Can individual personality or cognitive skills predict Internet use behavior?  How can library collection holdings and library data be represented geographically?
    7. Usability Testing: Principles  Keep the end user in mind  Achieve superiority through simplicity  Improve performance through design  Refine and iterate (Norlin and Winters, 2002, p.10)
    8. Usability Testing: Web Design Criteria  Links must be consistent and predictable  Group like things on the same page  Be consistent with language  Most important information should be on the first screen  Provide keywords for quick reading/scanning  Do not use animation or sounds  Make links look like links  Distinguish text from graphics  Avoid jargon (Spool, 1999)
    9. Usability Testing: Web Design Criteria  Ten Usability Heuristics (Nielsen) • Visibility of system status • Match between system and the real world • User control and freedom • Consistency and standards • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help users recognize, diagnose, and recover from errors • Help and documentation
    10. Usability Testing: Web Design Criteria  Goals for user-centered design • Enable users to achieve their particular goals and meet their needs • Enable users to move quickly and with few errors • Create a site that users like • Users are more likely to perform well on a product that provides satisfaction
    11. Usability Testing: Methodology  Artificial environment (laboratory) • Maintain more control • May provide more specific data on a particular feature  Natural environment • Better holistic representation of real people doing real work
    12. Usability Testing: Methodology  Four types of usability tests (Rubin, 1994, p. 31-46) • Exploratory test – early product development • Assessment test – most typical, either early or midway in the product development • Validation test – verification of product’s usability • Comparison test – compare two or more designs; can be used with other three types of tests
    13. Usability Testing: Methodology  Develop problem statements, objectives, and/or hypotheses  Identify and select participants who represent target population • May or may not be randomly selected  Select test monitor/administrator • Empathetic • Impartial • Good communicator • Good memory • Able to follow test structure • Able to react spontaneously to situations that cannot be anticipated • Allow user time for task • Don’t rescue the user • Continue with the plan if mistakes occur
    14. Usability Testing: Methodology  Design test materials • Screening questionnaire • Provides user profile • Ascertains pretest attitudes and background information • Provides information about participants’ previous knowledge and experience • Orientation script • Describes the test to participants • Aids in understanding the participants’ performance • Data logger materials • Data collection instrument for categorizing participants’ actions • Can note time to match with videotape recording
    15. Usability Testing: Methodology  Design test materials • Non-disclosure and tape consent forms for legal protection • Task list • List of actions participants will execute • Desired end results • Motives for performing task • Actual observations monitor will record • State of system
    16. Usability Testing: Methodology  Design test materials • Posttest questionnaire • All participants asked the same questions • Gather qualitative information and precision measurements • Debriefing guide • Structure and protocols for ending the session • Participants explain things not apparent in actions • Motive • Rationale • Points of confusion
    17. Usability Testing: Methodology  Test materials and equipment  Conduct the test • Represent the actual work environment • Users are asked to think aloud • Observe users while using or reviewing the product • Probe • Controlled and extensive questioning • Collect quantitative and qualitative data and measures • Record comments or questions about the product • Observe and document users’ behaviors
    18. Usability Testing: Methodology  Debrief  Analyze the data • Diagnose and recommend corrections • Categorize and identify problems with the product • Identify solutions • Qualitative analysis • Textual notes from debriefing • Read responses • Summarize findings
    19. Usability Testing: Methodology  Analyze the data • Quantitative analysis • Questionnaires • Screening • Posttest • Triangulation to validate findings • Data from questionnaires, observations, screen tracking software, comments, and open-ended questions
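The quantitative questionnaire data described above can be reduced to simple descriptive statistics before being triangulated with the qualitative notes. A sketch with invented Likert-scale responses (the questions and scores are illustrative, not from the study):

```python
# Sketch: summarizing posttest questionnaire responses on a 1-5
# Likert scale. Questions and scores are invented for illustration.
from statistics import mean, median

responses = {
    "The map was easy to navigate":       [4, 5, 3, 4, 2],
    "Labels used familiar terminology":   [3, 3, 4, 2, 3],
    "I found the data I was looking for": [5, 4, 4, 5, 3],
}

for question, scores in responses.items():
    print(f"{question}: mean={mean(scores):.1f}, median={median(scores)}")
```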
    20. Usability Testing: Interpret Data  Interpret the data • Five factors for benchmarking the usability of an interface (Shneiderman and Plaisant, 2004) • Time to learn • Speed of performance • Rate of errors • Retention over time • Subjective satisfaction
    21. Usability Testing: Interpret Data  Interpret the data • Prioritize severity of problems • Severity ratings (Zimmerman and Akerelrea, 2004) • Time required to complete task • Number of users who encountered problem • Negative impact on users’ perception of the product • Difficult if 70% of users cannot perform task • Error criticality = Severity + Probability of Occurrence (Rubin, 1994)
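Rubin's error-criticality formula above (criticality = severity + probability of occurrence) gives a simple way to rank the problems found. A sketch in Python; the problems are invented, and the 1-4 scales for both factors are an assumption for illustration:

```python
# Sketch: ranking usability problems by Rubin-style error criticality
# (criticality = severity + probability of occurrence). Problems and
# scores are invented; the 1-4 scales are an assumption.

problems = [
    # (description, severity 1-4, probability-of-occurrence 1-4)
    ("Users miss the date-range selector", 3, 4),
    ("Legend colors hard to distinguish",  2, 3),
    ("Export link mislabeled",             4, 2),
]

# Highest criticality first: these are the problems to fix first
ranked = sorted(problems, key=lambda p: p[1] + p[2], reverse=True)
for desc, sev, prob in ranked:
    print(f"criticality {sev + prob}: {desc}")
```

Note how a frequent moderate problem can outrank a rare severe one, which is exactly what the combined score is meant to surface.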
    22. Usability Testing: Interpret Data  Usable Web site: (Rubin, 1994) • Usefulness • Establish whether it does what the user needs it to do • Effectiveness • Ease of use to achieve the desired task • Learnability • Ease of learning application and moving from being a novice to a skilled user • User satisfaction • User’s attitude about the site—how enjoyable it is to use
    23. Usability Testing: Report Results  Executive summary  Report • Describe methodology • Who, what, when, where, and how • Describe how tests were conducted • Profile users and describe sampling • Detail data collection methods • Succinctly explain the analysis • Provide screen captures • Include tables and graphs • Provide examples • Identify strengths and weaknesses • Recommend improvements
    24. Usability Testing: Making the Data Work  Read report  Determine what worked and what did not work  Redesign product/system based upon findings  May be necessary to conduct another usability test
    25. Usability Testing: Limitations  Two major limitations (Wheat) • Reliability • Testing of users who may not be typical users • Individual variation within the test population • Validity • Test tasks, scenarios of the search processes, and testing environment may not be accurate • Results not generalizable to the entire user population • Testing is always artificial (Rubin, 1994, p.27)
    26. OCLC WorldMap™  Research prototype • Test geographical representation of WorldCat holdings • By country and date of publication • For library collection assessment and comparison • Complement the AAU/ARL Global Resources Network project • Geographically represent library statistical data from UNESCO, ARL, Bowker, and others • Number of libraries by type • Expenditures by library type • Number of volumes and titles • Number of librarians • Number of users
    27. Usability Testing: OCLC WorldMap™  Review sample handouts • Screening questionnaire • Task list • Posttest questionnaire • Executive summary
    28. Usability Testing: OCLC WorldMap™  Conducted informal usability tests  Currently redesigning the interface  Conduct second group of formal usability tests  Make revisions prior to making publicly available
    29. Questions and Discussion