Going to talk about:
- Why and how we did our usability testing
- What we found out: how successful our testers were and what tripped them up
- A couple of small changes we made to our Primo interface
- Muse briefly on how what we saw in the lab compares to usage in the wild
We spent a considerable amount of time determining what should show in the brief results, the details tab links, what fields should be searched, etc. We needed a check on our work before launch. We had already held workshops with Libraries staff, but there's only so much to learn from them…

Fix major problems:
- Jakob Nielsen: use a small number of users to find the most glaring problems
- Not all problems are fixable: Primo is highly customizable, but it is not something that we built and control fully
- Focused tasks on basic library searching scenarios

Examples of new features:
- Including catalog, local image collection, and IR along with article results
- Facet interface, new in this context
- FRBR'd results ("View all versions") for the catalog
- Primo-specific features like email and e-shelf; the item tabs

A chance to engage users for feedback without waiting for them to come to us (feedback form…). In terms of immediate changes we wanted to focus on specifics, but it was also useful to think broadly about discovery and where it fits in our services. Reference and instruction staff are in some senses Primo support staff (and reluctant ones at that). We wanted to be able to help them help users.
Task-based testing:
- Ask users to complete specific tasks; record success or failure and time to completion
- Think-aloud protocol: encourage users to talk out loud about why they're doing what they're doing

Accosting students worked best to get participants:
- A sign-up schedule was difficult to manage; students were unreliable or busy
- Used KU water bottles as incentives; we just did another round and found that dining gift cards work better (though they required more paperwork)
The "Other" category is language institute students.
We had 16 different majors: Humanities, English, Museum Studies, Linguistics, Communication, Global & Intl. Studies, Science/Technology, Molecular Bioscience, Electrical Engineering, Engineering, Pharmacy, Chemistry, Social Sciences, Social Welfare (2), Economics, Undecided & AEC.
We asked if participants had tried our new search tool: 4 said yes (25%) and 12 said no (75%).
Process:
- Explained what to expect and encouraged students to verbally explain what they were doing and why
- Tasks were read aloud to the student
- Students were given a written copy of the tasks to refer to
- Occasionally prompted participants if they looked stuck, weren't verbalizing, etc.

Tasks:
1. Do the KU Libraries hold a physical copy of the book James Joyce and the Politics of Egoism by Jean-Michel Rabaté, published in 2001? Where would you locate it?
   (Wanted to force users to confront FRBR'd results and see what happened)
2. Please find an electronic version of the book The Scarlet Letter by Nathaniel Hawthorne.
   (Interest in online texts)
3. Search for this scholarly journal article: "On the tragedy of love in The Scarlet Letter", published in the journal Studies in Literature and Language, 2011. Mark and save the citation to the e-shelf. Email the citation to yourself. Pull up the article.
   (Wanted to test basic search for an article, but also interested in Primo-specific features)
4. Search for an online, peer-reviewed article published in the last 5 years on the topic: U.S. foreign policy and immigration from Mexico.
5. Tell us about an assignment in the last year where you needed sources. Use this search to find 2 resources that you think would have been appropriate.
Morae Recorder (from TechSmith):
- Captured screen text, keystrokes, mouse clicks
- Web camera with microphone: picture-in-picture and sound

Morae Manager:
- Search across many recordings to uncover patterns and trends in the data
- Find and view the exact moment when participants clicked a button, typed something, etc.
- Reviewed video and logs to add success/failure and usage of features we were interested in
- Exported data to analyze further using Excel and R
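The per-task tallying we did on the exported logs can be sketched in a few lines. A minimal illustration in Python (we actually worked in Excel and R); the participant IDs, task names, and results below are hypothetical, not our real data:

```python
from collections import defaultdict

# Hypothetical rows exported from Morae: (participant, task, success)
results = [
    ("P01", "known-item book", True),
    ("P01", "ebook", True),
    ("P02", "known-item book", False),
    ("P02", "ebook", True),
]

# Tally successes and attempts per task
totals = defaultdict(lambda: [0, 0])  # task -> [successes, attempts]
for _, task, success in results:
    totals[task][1] += 1
    if success:
        totals[task][0] += 1

for task, (ok, n) in sorted(totals.items()):
    print(f"{task}: {ok}/{n} ({ok / n:.0%})")
```

The same shape of summary (success rate per task) feeds directly into the charts we built for this talk.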
Conducted verbal follow-up after the tasks, allowing a discussion in which users could reflect on their experience. "Overall impression" and "How likely…" were asked as 1-5 scale questions.
Overall, our users were able to complete basic library search tasks in Primo.
Success means that the user achieved the desired result. Tasks 1-3 are pretty objective: we know what they should have found. For task 3, finding the article was good enough; people struggled to find the email and e-shelf features. We let users determine whether or not they were able to find a suitable article in task 4. Failure was when a user gave up, or said they had completed the task but had found the wrong resource.
Where's task 5? We found that our unstructured prompt was interpreted very differently by different users, and it became very difficult to judge success or failure. We were able to observe some additional behavior, but the success/fail data isn't very reportable.
More than half of our testers completed all tasks successfully. Only a couple completed fewer than half of the tasks.
Another common issue found during testing wasn't a Primo problem; it was a problem with our link resolver page. [click] Students are used to blue text indicating a link, so they click on the Journal, Resource, or Edit Citation links rather than our big blue Article button.
The order of the resources on the link resolver matters; some are more intuitive than others. If the student was able to get to the article from this page, they usually chose the top link, [click] which in this case is for Literature Online, which has its own set of issues.
Students had trouble finding the link to the full text once they were on the LION result page. The key in the center of the page [click] tells you what each image next to your result means; for example, [click] this icon means Full Text. Students attempted to click on the full-text icon in the key section or next to the number 1, instead of just clicking on the title.
We adjusted our knowledge base so that LION appears as one of the last choices when multiple databases contain the article.
The task was to find the scholarly journal article "On the tragedy of love in The Scarlet Letter", published in the journal Studies in Literature and Language, 2011.
This example shows a student who narrowed their search before starting, a technique we've taught users for many years to help them find what they're looking for. This search failed because the Type was set to Journals instead of Articles, but [click] in Primo they would have saved time and avoided the confusion if they'd just typed the title into the basic search box. Note, though, that the "(Report)" at the end of the title gave users considerable pause.
We observed that Title was by far the most important thing users looked at; it was rare to see someone look at Details or anything else. So metadata matters, but mostly in the title.
Added a tip for Boolean searches; the jury is still out.
- Hard to distinguish when terms aren't intended as Boolean operators
- Not providing the suggestions if "Did you mean" (DYM) options are present
- Not providing suggestions if the search is on a whole phrase, or if punctuation is present (colons are good for weeding out pasted titles)
- Thinking about using a maximum number of terms, or looking at the ratio of Boolean to non-Boolean terms
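The kind of heuristics listed above could be sketched as a single check. This is a hypothetical illustration in Python, not our production code; the function name, threshold values, and the exact rules are assumptions layered on the ideas we're considering:

```python
BOOLEAN_OPERATORS = {"AND", "OR", "NOT"}

def should_show_boolean_tip(query: str,
                            dym_present: bool = False,
                            max_terms: int = 8,
                            min_ratio: float = 0.25) -> bool:
    """Hypothetical heuristic: show the Boolean-search tip only when the
    query looks like an intentional Boolean search.

    Suppress the tip when "Did you mean" suggestions are already shown,
    when the query is a quoted phrase or contains punctuation such as
    colons (common in pasted titles), when the query is very long, or
    when the ratio of operator terms to other terms is low.
    """
    if dym_present:
        return False
    if '"' in query or ":" in query:
        return False
    terms = query.split()
    ops = sum(1 for t in terms if t in BOOLEAN_OPERATORS)
    if ops == 0 or len(terms) > max_terms:
        return False
    non_ops = len(terms) - ops
    return non_ops == 0 or ops / non_ops >= min_ratio
```

For example, `should_show_boolean_tip("cats AND dogs")` would fire, while a pasted title containing a colon would not.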
Other changes we made due to testing:
- Changed the order of the facets, moving Format to the top instead of Topic
- Removed the "More options" under Date, which was very confusing for users
Compares feature use per task in our tests with feature use per visit from our Google Analytics for Primo.
Big caveat about searches: most of our Primo searches start from outside Primo, as dlSearch.do searches from the homepage. Those aren't included here, only subsequent searches, so searches are probably underrepresented here.
Primo Usability at the University of Kansas
Miloche Kottman - Scott Hanrath
& fix major problems
product, new features
support reference &
Task-based user testing
Participants solicited via social media &
by engaging ("accosting") students in the library
◦ 2.5 days of testing
◦ ~ .5 hour per student
Graduate & Prof
Not at all
Known item: physical item
Known item: ebook
Known item: journal article
Open-ended: peer-reviewed article for supplied topic
Open-ended: any resource deemed appropriate
by user for user's own topic