NCSU Libraries Usability Testing

  • Several participants rated their comfort level on a 1–10 scale; all rated themselves at 7–8. We see these same “most frequent uses” play out across both of the tests we did.
  • Menus were learnable: a number of users scanned menus to orient themselves to the options before making selections, and went back easily to find the correct links. Find seems to resonate as a broader term than Search for this user group; many started there when asked to find almost anything. Users seemed to quickly develop a mental model for “Find” and moved on to explore other menus. Services worked well as a catch-all; many users seemed to understand the concept of Services.
  • Users seemed to quickly adopt a mental model for Find.
  • In general, users seemed to understand the nature of the Services category. Some users started here for things that were on the Research Help menu, but quickly adapted their understanding of the Services menu.
  • Task 9: You’ve written a paper but need to make sure the citations are correctly formatted. How would you locate the tool that can help you with this? While users in this test did not identify the Citation Builder label, we scored the task as a success if they identified the Citation Tools tab, because Citation Builder would be listed prominently on the target page. However, because we know from user interviews and search logs that Citation Builder has strong name recognition, we recommend retaining the trigger words in the sub-link on this menu.
  • Only two tasks required use of the About menu, because it was assumed that navigation to core applications and functions on the Web site is included on the other three menus, so we focused our testing efforts there. We will be doing a card sort to further test menu composition, including how users group items within menus.
  • Task #8: 8 of 10 participants selected the About menu for locating maps, but only 1 located the View Library Maps link in the bottom section of the menu.
    -- Links in the bottom section of the menus may not be visible.
    -- Most people went to the D.H. Hill library for maps; this may indicate that users expect to find wayfinding information organized by location.
    -- This could be a design issue; we should test again after the design is applied, and across multiple menus. If this is still a problem, consider incorporating these links into the menu lists.
    Task #10:
    -- 4 of 8 participants correctly identified the Find > Databases link.
    -- The confusion among the remaining participants about how to locate databases/article search tools highlights the perpetual problem that libraries have in describing and providing access to article databases. We found related problems in our search testing.
  • Task #11: We were looking for Tripsaver name recognition here.
    -- 4 of 9 participants found Tripsaver readily, though 1 user indicated that he would normally get to Tripsaver via the catalog.
    -- Of the remaining 5: 1 chose Find > Course Reserves; the 4 remaining users went to Services > Borrow, Renew, Request.
    -- 8 of 9 users went to the Services menu to find Interlibrary Loan.
    -- Tripsaver should be featured on the Borrow, Renew, Request target page.
    Task #13: 6 of 8 users did not see the Tripsaver link in the 2nd column of the Find menu, likely because the trigger word Tripsaver was not in the task. Note that 8 of 9 users went to the Services menu.
  • Task #14: Only 2 participants selected Research Help > Course Tools for this task. Users seemed to expect to find all course-related information under Course Reserves. Promote the link to Library Tools for Courses on the Course Reserves target page. Continue to evaluate placement and labeling for Library Tools for Courses.
  • Carefully consider, and if possible test specific links or labels, before prominently featuring links to services that are primarily used in the physical space. Most participants focused on the “help” aspect of the Research Help label, but a couple expected to find research tools or databases here. This label has been changed to Get Help in later iterations to clarify the intent of the menu. More testing will be needed to make sure that the Citation Tools and Course Tools labels still work on a Get Help menu.
  • Tripsaver link – in determining how prominently to feature Tripsaver, consider how often users might look for Tripsaver outside the context of catalog searches. Our interview research indicated good name recognition for Tripsaver, but most folks found it at the point of need during catalog searches.
  • What were our research questions starting out? What were our assumptions? Some assumptions:
    Users would stay on the ‘All’ tab.
    Users would switch between tabs in the search results.
  • What models were we testing?
    Model 1: Tabbed Search Box Interface with Tabbed Search Results, where the user could switch between search results and the search would re-execute within the target silo.
    Model 3: Tabbed Search Box Interface with Non-Tabbed Search Results, where the user could remain in a silo. To re-execute the search in a different silo, the user would need to click the ‘Home’ button or the browser back button to return to the Tabbed Search Box Interface.
  • We had a high rate of response doing guerrilla testing in the lobby. 14 participants performed 46 tasks using Model 1 and 14 participants performed 38 tasks using Model 3.
  • Who were the participants? Over 19 departments in 3 overall fields were represented in a pool of 28 participants.
  • What did our users use on the current Web site?
    Research activities – finding journal articles and books (17/28)
    Service-based activities – booking study rooms, borrowing and renewing laptops (10/28)
    Course Reserves (2/28)
  • We asked users: if you could improve one thing about the Web site, what would it be?
    Reducing clutter was a common response.
    Providing better access to the room reservation system and laptop borrowing and renewal.
    Two participants mentioned advanced search (a way to get to advanced search from the homepage).
  • We segmented the results into 3 high-level observations and then outlined some of the problem areas in the testing. The first observation was that in well over half of the tasks in both search models, participants selected a search tab before beginning their search task. As we’ll see later on, this did not necessarily predict success in the task. In 60 tasks, users selected a tab before searching; in only 24 tasks did users stay on the ‘All’ tab. Users tended to stay within silos – very rarely did they search in both ‘All’ and a silo.
  • In over half of the tasks where users selected a tab, they selected the tab that was appropriate for that search.
  • What we don’t know is whether, when they refined their searches in the silo, they were expecting that their search was executed in the same silo. This gets at whether or not the interface is learnable. Did the user not understand (in the case of Model 1) that the search would carry through to the various silos?
  • What worked for our users? For books, journal titles, and services tasks, most users went to the correct tab or searched in All – i.e., users used the ‘Web site’ and ‘Books & Media’ tabs appropriately. Trigger words in the questions directed people towards the correct tab. Again, selecting the correct tab did not predict whether the user was successful in the search task.
  • Journal articles and databases questions were problematic. In many cases for the database question, users mentioned that they were looking for a database tab or link. They understood the term databases, but were confused about where to go to search for a database in either Model 1 or Model 3. Users were looking for the trigger word databases.
  • The following chart outlines task difficulty across models and across tasks. For the DVD question, results could be skewed, as several participants commented that they would use the facets in the catalog to narrow by format type; almost all users scanned results for the format ‘DVD’ rather than typing ‘DVD’ into their search. The photocopy question (#3) scored low in Model 3 because users scanned results so quickly that they often missed the link to photocopies; also, if they were using the Web site search, best bets were not part of this. In many cases the correct tab was chosen, but either the search term was incorrectly constructed or users scanned results so quickly that they ended up failing the task.
  • Task difficulty charted. Success rates for tasks 6, 7 & 9 were quite different between the 2 models. Data from each of those tests are below. In looking back at observations from the study itself, search terms were quite varied between the 2 models for these questions. For question 6, general confusion about databases led to failed searches. For tasks 7 & 9, users tended to scan search results so quickly that they quite often missed the correct answer in the search results.
    Task 6: Web of Science database
    Model 1 (difficulty / tab): Fail / All, Fail / Articles, Fail / Articles, Hard / Articles, Easy / Library Website
    Model 3: Hard / All, Easy / All, Fail / Articles, Fail / Journals
    Task 7: Lessig article
    Model 1: Med / Articles, Fail / All, Fail / Journals, Fail / Journals
    Model 3: Med / Articles, Easy / Articles, Fail / Journals, Easy / Articles
    Task 9: Borrow a laptop
    Model 1: Hard / Website, Med / Website, Easy / Website, Fail / Books
    Model 3: Easy / All, Easy / Website, Easy / Website
  • We also segmented the data by user status. In the case of undergraduates, their average task difficulty rating was 2.5.
  • Issues in the test itself: scenarios were too difficult to read aloud in a guerrilla usability testing setting. Tasks should be shorter and more precise.
  • Journals / journal titles will appeal to faculty, a group we did not test in this usability test. Databases will be visibly highlighted on the new homepage design.
  • There’s nothing to indicate that search tabs hindered the users’ search. Build 2 prototypes for the new site – one with a tabbed search box and one with a single search box and links to silos. Conduct a second usability test to determine effectiveness.
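The per-attempt outcomes listed in the Task 6/7/9 notes above can be tallied into completion rates per model. A minimal sketch, assuming (our reading, not stated in the deck) that any facilitator rating other than Fail counts as a completed attempt:

```python
# Facilitator ratings transcribed from the Task 6/7/9 notes above.
# Assumption: any rating other than "Fail" counts as a completed attempt.
ratings = {
    ("Task 6: Web of Science", "Model 1"): ["Fail", "Fail", "Fail", "Hard", "Easy"],
    ("Task 6: Web of Science", "Model 3"): ["Hard", "Easy", "Fail", "Fail"],
    ("Task 7: Lessig article", "Model 1"): ["Med", "Fail", "Fail", "Fail"],
    ("Task 7: Lessig article", "Model 3"): ["Med", "Easy", "Fail", "Easy"],
    ("Task 9: Borrow a laptop", "Model 1"): ["Hard", "Med", "Easy", "Fail"],
    ("Task 9: Borrow a laptop", "Model 3"): ["Easy", "Easy", "Easy"],
}

def completion_rate(attempts):
    """Fraction of attempts not rated Fail."""
    return sum(r != "Fail" for r in attempts) / len(attempts)

for (task, model), attempts in sorted(ratings.items()):
    print(f"{task} / {model}: {completion_rate(attempts):.0%} ({len(attempts)} attempts)")
```

This makes the model comparison in the notes concrete: for example, Task 9 completes far more often under Model 3 than Model 1, while Task 6 fails frequently under both.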


  • 1. NCSU Libraries Usability Testing
    April, 2010
    Angie Ballard & Susan Teague Rector
  • 2. Navigation Testing
  • 3. Research question
    Do users navigate using the expected menu items?
  • 4. Background
    Users were recruited in situ in the D.H. Hill Library lobby and asked to complete 4 of 15 tasks using a working prototype of navigation menus only.
    Each task required the user to open a navigation menu and indicate which menu item they would select to look for the specified information.
    Facilitators recorded up to 4 of each user’s menu selections, in order of selection.
    A facilitator assessment of task difficulty was also recorded.
  • 5. Who were the participants?
    24 Undergraduates
    3 Graduates
    2 Staff
    1 Visitor
    2 Library Staff Members
    Departments represented:
    Management: Agri-Business
    Science & Engineering: Biology, Textiles, Computer Networking, Environmental Engineering, Environmental Technology, Biochemistry
    Humanities/Social Sciences
  • 17. Use of the Web site
    Nearly all participants described themselves as “pretty comfortable” or “very comfortable” with the current site.
    Users reported that they used the current site most frequently for:
    Research (finding articles/books)
    Reserving Rooms
    Reserving/Renewing laptops
    Course Reserves
    Most users expressed general satisfaction with the current site or described positive experiences.
    Three users said the current site seems cluttered.
  • 19. Observations
    Menus were learnable
    Find resonated as a broader term than Search
    Services menu worked well as a catch-all
    Research Help was the most ambiguous
    About was generally used as expected
  • 20. What worked well: Find
    Find was very learnable
    Books & Media label
    Users readily used this to search for DVDs
  • 21. What worked well: Services
    Users went here to locate photocopy pricing.
    Digital Media Lab
    DML had good name and function recognition.
  • 22. What worked well: Research Help
    Citation Tools label
    Users easily found the citation tools here, but few saw the Citation Builder sub-link.
  • 23. What worked well: About
    Users did not hesitate to look for hours information on the About menu.
  • 24. Problem Areas
  • 25. Problem Areas
  • 26. Problem Areas
  • 27. Task Difficulty
  • 28. Other Observations
    Some users insisted that they would not use the Web site to complete certain tasks.
    Photocopy cost; flash drive loans; ask a librarian; info on video editing
    When asked what they would expect to see under a Research Help menu (n=6)
    4 mentioned chat, tutorials, citation help, and/or research guides
    2 expected to find links to resources or search tools.
  • 29. Recommendations
    Feature a Tripsaver link on the Borrow, Renew, Request target page.
    Make access to Databases prominent on the homepage for those users who identify the term.
    Change Research Help label to Get Help to better reflect the intent of the menu.
    Test to see if Citation Tools & Course Tools still work
  • 30. Recommendations
    Promote link to Library Tools for Courses on Course Reserves target page.
    Continue to evaluate placement and labeling for Library Tools for Courses
    Organize maps/wayfinding information by location.
    Include maps links on target pages for specific libraries/locations.
  • 31. Recommendations
    Make links to these top tasks easy to find on the homepage:
    Reserve a Study Room
    Renew laptop
    Course Reserves
  • 32. Search Testing
  • 33. Research Questions
    Do users pre-select tabs in a tabbed search box before entering search terms?
    How do users interact with tabbed or non-tabbed search results?
  • 34. Search Models
    Model 1
    Tabbed Search Results
    Model 3
    Non-Tabbed Search Results
  • 35. Background
    28 undergraduates, graduates, and library staff participated in a round of guerrilla usability testing of 2 proposed search models.
    Participants were recruited from the pool of patrons who walked into the lobby of D.H. Hill.
    Each participant was asked to complete 2 tasks using one of the search models; many participants volunteered to answer more than 2 questions
    14 participants performed 46 tasks using Model 1
    14 participants performed 38 tasks using Model 3
  • 36. Who Were the Participants?
    Science/Engineering: Aerospace Engineering, Animal Science/Biology, Biology, Biomedical Engineering, Chemistry, Civil Engineering, Computer Science, Electrical Engineering, Food Science, Human Biology/Nutritional Science, Integrated Management & Systems, Meteorology, Textiles Engineering
    22 Undergraduates
    3 Graduates
    2 Library Staff Members
    1 Non-Traditional Student
  • 56. Use of the Current Web site
    Most frequent uses:
    Finding Journal Articles & Books
    Booking Study Rooms
    Laptop Borrowing & Renewal
    Course Reserves
    Suggested improvements:
    Highlight reservations (room, laptop) and renewal options
    Make the site less cluttered
    Provide advanced search
  • 58. Observations
    • In both search models, in well over half the tasks, participants pre-selected a search tab before beginning their search.
  • 59. Observations
    Across both search models, in 45 of 60 tasks in which users selected a tab, they selected the appropriate tab for that task.
  • 60. Observations
    In both models, users rarely switched silos in the search results once in a silo. Users tended to stay in the same silo and refine searches within those silos.
  • 62. Problem Areas
  • 63. Task Difficulty
  • 64. Task Difficulty
  • 65. Task Difficulty by Status
  • 66. Other Issues
    Almost all participants only scanned the first page of search results for a quick answer to the task
    Even if users executed a successful search, they often did not scroll down far enough in the search results to find the answer
    Spelling was problematic
  • 67. Recommendations
    Although there was confusion with the Journals and Articles tabs, consider keeping both tabs to accommodate both novice and advanced users. Consider changing the ‘Journals’ tab label to ‘Journal Titles.’
    Look to the redesign team to better highlight an option for databases on the homepage. Do not add a tab for databases as this could further highlight issues between databases, articles & journals.
  • 68. Recommendations
    Consider implementing a tabbed search box in the new design.
    Conduct more usability testing in the context of the new site design.
  • 69. Next Steps
    Take recommendations from usability testing and incorporate into homepage wireframes
    Build 2 search prototypes in the context of the new Web site homepage – one with a tabbed interface and one with a single search box with links to silos.
    Conduct more usability testing on the working homepage prototypes.
  • 70. Thank you! Questions?