Usability Testing



In this presentation, I will explore how usability testing has been used to spark changes in the design of the Northwest Justice Project’s new SharePoint content management system. Usability testing has become a popular user research method for assessing different aspects of a product, most commonly websites. By conducting usability tests early and often in your design process, many problems with the product can be targeted and resolved before a final design is released. Jackie Holmes will present on the basics of usability testing such as how to prepare tasks, what to look for during the test, how to analyze your observations, and how test results can influence design.

  • Welcome to this webinar on usability testing. After today, I hope you come away seeing the value and purpose of usability testing.
  • This is an introduction to usability testing. We'll cover the basics of this evaluation technique: what usability testing is and why it is important, how to prepare for and conduct your own tests, and how to analyze your test results and see how they may influence changes in your product. I've conducted several usability tests over the past few weeks here at NJP, and throughout the presentation I'll share examples to show how I've done usability testing and what we've learned from it.
  • Before we jump in, a little background on myself. This is Jackie Holmes on an unusually sunny day in Seattle last year. I'm working at the Northwest Justice Project in Seattle as an information architecture/user experience summer intern, a role that bridges users and developers. We are creating a new NJP information management system using Microsoft SharePoint, meant to share resources and information in one central location; the goal is to increase the effectiveness and efficiency of client services. I've spent time interviewing and shadowing users to discover their goals, tasks, needs, and frustrations. With this knowledge, I create task flow diagrams and draft prototypes (aka wireframes) to influence how the site can be used by its users. I then use these materials to talk with the developers about designing a SharePoint site that will satisfy the users' needs. So, getting back to my internship title: I architect how information will best be used on the SharePoint system and involve the users throughout the development process to understand how to provide them an enjoyable and useful user experience (often abbreviated UX). I am currently a graduate student at the University of Washington in Seattle, pursuing an MS in Information Management from the UW's Information School. I also have a BS in psychology from UW, which helps, since user experience research and design ties closely to behavioral science, human factors, and cognitive psychology.
  • So what is usability? Dr. Jakob Nielsen, widely regarded as the father of modern website usability, defines it this way: "Usability is a quality attribute that assesses how easy user interfaces are to use. The word 'usability' also refers to methods for improving ease-of-use during the design process." A big part is trying to design systems that are easy for the user to interact with.
  • Five components: Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: Once users have learned the design, how quickly can they perform tasks? Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency? Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors? Satisfaction: How pleasant is it to use the design?
  • A usability test is a research technique for evaluating the usability of a product (commonly a website). You observe actual users performing tasks they would actually use the product for.
  • Usability testing is important because it allows us to get user feedback and determine satisfaction, lets us catch and fix problems before launch, and promotes user-friendliness by hearing from the users and designing around them.
  • If you've seen my blog post series on usability testing, you're familiar with this cycle graphic I created on the steps of usability testing. [talk about cycle steps] It's a cycle because usability testing hardly ends with one round; it is an ITERATIVE process. You're likely to make changes based on the results, and then you want to go through the usability testing cycle again to see if usability has improved.
  • You'll need a few things to prepare for your usability tests: 1) A prototype of the product. It should function well enough for a user to complete the tasks you give them. This is a screenshot of our SharePoint site, IKE (Information, Knowledge, Etc.). 2) Users/testers to actually do the tests; you can't have a usability test without them! Through research of his own, Dr. Nielsen suggests 5 testers are sufficient for finding most usability problems. Set up a time to meet; performing the test shouldn't take more than an hour.
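Nielsen's "5 users" guidance comes from a simple discovery model: if each tester independently reveals a fixed share L of the usability problems, the proportion found by n testers is 1 - (1 - L)^n. A minimal sketch, assuming the average per-tester rate of L = 0.31 that Nielsen and Landauer report:

```python
# Expected share of usability problems found by n testers,
# per the Nielsen/Landauer discovery model: found(n) = 1 - (1 - L)^n.
# L = 0.31 is the average per-tester discovery rate they report.

def problems_found(n, L=0.31):
    """Expected proportion of usability problems found by n testers."""
    return 1 - (1 - L) ** n

if __name__ == "__main__":
    for n in (1, 3, 5, 8):
        print(f"{n} testers -> {problems_found(n):.0%} of problems")
```

With five testers the model predicts roughly 85% of problems found, which is why small repeated rounds tend to beat one large round.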
  • 3) Tasks: what are you testing? These should be realistic tasks the user would need to use the product for. 4) Recording devices for taking notes and observations during the test. Pen and notepad are just fine, but you may want to consider using an audio recorder and/or video camera to capture information that you can come back to later for review. Also, decide what you want to record and remember from the test. Consider metrics, like number of minutes to complete tasks, user comments, etc.; this helps you organize notes during the test. I wouldn't recommend a laptop for recording notes: even though it's easier than pen and paper, the sound of the typing may be distracting to the user.
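Deciding on metrics up front makes notes far easier to compare across testers. One way to structure per-task observations is sketched below; the field names are my own illustration, not NJP's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    """One tester's attempt at one task. Fields are illustrative."""
    tester_id: str
    task: str
    minutes_to_complete: float
    completed: bool
    errors: int = 0
    comments: list = field(default_factory=list)

# Example: logging one run of the "upload a document" task.
obs = TaskObservation(
    tester_id="T1",
    task="Upload a document to the IKE Library",
    minutes_to_complete=4.5,
    completed=True,
    errors=1,
    comments=["Didn't notice the Library Tools tab at first"],
)
print(obs.tester_id, obs.minutes_to_complete, obs.completed)
```

Having the same fields for every tester means the quantitative columns (minutes, errors) can be averaged later, while the comments feed straight into the affinity exercise.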
  • Here at NJP, we had the IKE prototype, and felt it functioned well enough to help users achieve their goal of sharing information resources quickly, efficiently, and effectively. We got 8 people at NJP to be testers (it helped to have a liaison in the organization to connect me with people). Testers were representative of those who would actually use the system. From interviews, it sounded like the legal assistants would be some of the main users of the site, since they upload and retrieve documents on a daily basis, so we tested with more of them.
  • Tests will likely focus on a particular area of your product. For us, it was the Library section of IKE, since that was the central place for sharing documents and other resources. We were primarily looking at three pieces of the Library: ease of uploading documents; ease and learnability of tagging (adding descriptive keywords to documents); and ease and satisfaction of the various browsing and search functions.
  • To get at these functions in the Library, our tasks were: 1) Upload a document you would like to share with your colleagues to the IKE Library. 2) Think of a case or project you recently worked on that required you to look for a document in the network drives, ARC, and/or Intranet. Show how you would look for such a document on IKE.
  • Most of our testers had never seen IKE so some of the usability factors were more relevant than others to measure for this first round…
  • Thus, we focused on learning more about the learnability, errors, and satisfaction factors of IKE.
  • So what do you do during the actual test? It's a good idea to run the test in the environment the user will actually use the product in, since you're trying to make the experience as close to reality as possible. However, tests can also be conducted in a lab, an empty conference room, or even remotely, as long as you have software that allows you to talk to the user and see what they are doing with the product. Introduce the test; make clear you're testing the product, not the user; ask the user to 'talk aloud' so it's easier to take notes; read the task to the user; ask the tester to tell you when they think they've completed the task; observe and take notes.
  • Stand or sit next to the tester. If you can, try to have two moderators, one doing more of the note-taking and one doing less. After completion, ask the tester: What did you like/dislike about the task? Did anything make the task easy? Difficult? Is there anything you would change? This gauges satisfaction.
  • After the tests, you'll have a bunch of data. It may be quantitative (numbers, averages, number of clicks, number of minutes to complete) and it can also be qualitative (user comments, questions). It can be daunting to know what to do with all the data, especially the qualitative. So, from here, I will explain one way you can organize and analyze all the qualitative data.
  • To help you figure out what the qualitative data is saying, you can make an affinity diagram. This technique helps you discover themes in your data. You begin by writing all of your qualitative data on individual sticky notes.
  • You then stick the notes to a blank wall and start arranging them into categories. This exercise allows you to easily see all the data together and move it around into clusters based on common themes.
  • Here is how the affinity diagram for NJP turned out. You can see one of our categories was based on issues that could possibly be solved with more instructions on IKE or in trainings.
  • Early on we realized most comments had to do with problems people had and suggestions for improvements. This was not surprising, since we had not run any tests with actual users beforehand. Thus, it made sense to cluster the data into different types of issues or problems. Here you can see what clusters/themes arose from the usability tests. Naming issues were the most frequent, such as people not understanding what a term like metadata meant, or feeling certain labels could be named differently. Visibility issues were second, and included such things as not being able to see or find a button.
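Once the sticky notes have been assigned to clusters, tallying comments per theme shows at a glance which issue types dominate. A small sketch of that tally, using made-up notes and theme labels like the ones that emerged at NJP:

```python
from collections import Counter

# Each sticky note has already been assigned a theme during the
# affinity exercise. These (note, theme) pairs are illustrative,
# not the real NJP data.
notes = [
    ("Didn't understand the term 'metadata'", "naming"),
    ("Felt 'Library' could be named differently", "naming"),
    ("Couldn't see the search box", "visibility"),
    ("Didn't find the upload button", "visibility"),
    ("Wants a confirmation after upload", "feedback"),
]

theme_counts = Counter(theme for _, theme in notes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Sorting by count reproduces the kind of ranking described above, where naming and visibility issues came out on top.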
  • After seeing your results and the issues that may have come out of them, you'll want to rate the severity of each issue so you know what should take priority for fixing. We looked at each of our issues and rated it based on how frequently it was encountered and how much it could affect priority tasks on IKE (such as uploading or finding a document).
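One way to make that rating repeatable is to score each issue on the two criteria just named, frequency and impact, and map the combined score to a severity label. The scheme below is my own illustration; the 1-3 scales and thresholds are assumptions, not a standard:

```python
def severity(frequency, impact):
    """Rate an issue's severity from two 1-3 scores.

    frequency: 1 = seen rarely, 3 = seen by most testers
    impact:    1 = cosmetic, 3 = blocks a priority task
    Thresholds below are illustrative, not a standard scale.
    """
    score = frequency * impact
    if score >= 6:
        return "severe"
    if score >= 3:
        return "moderate"
    return "minor"

# E.g. a problem most testers hit that blocks a priority task:
print(severity(frequency=3, impact=3))  # -> severe
```

A hidden search box that most testers hit and that blocks a priority task scores "severe" under this scheme, matching the rating given to that issue in the NJP results.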
  • As an example, here are the individual visibility issues that came out of our tests. A common comment: users alerted us that they couldn't see the search box when the Library Tools tab was open. That's because Library Tools covered up both the search box and links to other parts of IKE. This was a severe problem that needed to be fixed, because it eliminated a major search/browse function on IKE and thus negatively affected one of the major goals of IKE: to efficiently and effectively find information. Our usability results also showed us that 7 of the 8 users tested went straight to the search box when looking for a document. Another issue was that people didn't see the Library Tools tab. This was rated moderate/severe because, although the upload button is in this tab, there are additional ways to upload a document that are always present, and many of the other features that appear in Library Tools will probably not be used by most users. Furthermore, a less frequently commented problem was about features not standing out on IKE. Even though it wasn't brought up that often, I still rated it as a severe problem, since we want features to be easily found and to attract your eye.
  • This is another view of where many visibility issues occurred.
  • The results of a usability test can have a big impact on the design of your product and can alert you to changes you should make. Often you discover where and what changes should be made.
  • So coming back to the severe problem of not being able to see the search box under Library Tools ribbon…
  • We decided to fix this problem by making the search box and additional links appear at all times, even when the Library Tools tab is open. This improved both search visibility and navigation.
  • I bring us back to the process cycle again to reiterate that usability testing usually happens more than once. After we make changes to IKE based on the user feedback that came out of the first round, we will do another round of usability tests to see if and, if so, how usability has improved on IKE.
  • Usability Testing

    1. Usability Testing. Jackie Holmes, August 30, 2012
    2. What will be covered • The What? and Why? of usability testing • How to prepare, conduct, and analyze a test • Examples of usability testing at NJP
    3. A bit about me
    4. What is usability? "Usability is a quality attribute that assesses how easy user interfaces are to use. The word 'usability' also refers to methods for improving ease-of-use during the design process." Source: Dr. Jakob Nielsen
    5. What is usability? Five components: • Learnability • Efficiency • Memorability • Errors • Satisfaction. Source:
    6. What is usability testing? Research technique for evaluating the usability of a product, using real users doing real tasks
    7. Why is it important? Get user feedback and determine satisfaction • Catch problems before launch • Promotes user-friendliness
    8. How do you get started?
    9. The Process
    10. Preparation: Product Prototype • Users/Testers
    11. Preparation: Tasks/What are you testing? • Recording Devices
    12. Preparation at NJP: IKE Site Prototype • Users/Testers: 4 Legal Assistants, 3 Attorneys, 1 CLEAR Attorney
    13. Preparation at NJP • What was tested? IKE Library: Document Uploads • Tagging • Browse & Search Functions
    14. Preparation at NJP. Tasks: 1) Upload a document you would like to share with your colleagues to the IKE Library. 2) Think of a case or project you recently worked on that required you to look for a document in the network drives, ARC, and/or Intranet. Show how you would look for such a document on IKE.
    15. Preparation at NJP: Learnability • Efficiency • Memorability • Errors • Satisfaction
    16. Preparation at NJP: Learnability • Efficiency • Memorability • Errors • Satisfaction
    17. Test Administration * Introduce test * Testing product, not user * User to 'talk aloud' * Read task to user * Ask tester to tell you when they think they've completed the task * Observe and take notes
    18. Test Administration * Stand or sit next to tester * Try to have two moderators * More and less note-taking * After test, ask about impressions
    19. Analyzing Results: Quantitative Data • Qualitative Data
    20. Analyzing Results w/ Affinity Diagram: 1. Write down all qualitative data (comments, questions, etc.) on sticky notes
    21. Analyzing Results w/ Affinity Diagram: 2. Arrange notes into clusters on wall based on common themes
    22. Analyzing Results w/ Affinity Diagram
    23. Analyzing Results
    24. Analyzing Results: Rate severity of issues raised * How frequently issue was encountered * How much it could affect priority tasks to be done in IKE
    25. Analyzing Results. [Bar chart: Visibility Issues, # of comments per issue] Can't see search box under Library Tools ribbons (Severe) • Can't find Upload/Add document buttons • Did not see Library Tools tabs (Moderate/Severe) • Did not see Filters box • Did not see Key expansion triangle for term sets • Features are muted on IKE, nothing stands out (Severe) • Want to see thumbnail of uploaded file in dialog box, so you know the upload was successful • Don't want training materials hidden like on the intranet
    26. Analyzing Results
    27. Influence on Design: Discover where and what changes should be made
    28. Influence on Design of IKE. [Bar chart: Visibility Issues, # of comments per issue, highlighting the Severe issue: Can't see search box under Library Tools ribbons]
    29. Influence on Design of IKE
    30. The Process
    31. Thank You. This call will be archived here: