Hi, I’m Stephen Francoeur and I’m going to offer an introduction to usability testing today.
About a year ago, I changed jobs here at the library at Baruch College, where I have been working since 1999 as a reference and instruction librarian. I began a new position as user experience librarian so that I could help the library better design and integrate its online services and resources. I’ll talk in a moment in greater depth about what user experience is and how it relates to usability, but for now I’ll mention that over the past year, I’ve done rounds of usability testing on a new library website we’re still working on, on a mobile-friendly webpage that connects users to mobile-friendly databases, and on our newly launched instance of Summon. I have a lot of interests in librarianship, which you can see if you check out some of the blogs I regularly maintain and on the many social networks where I like to hang out with librarians from all types of libraries across the globe.
Four years ago, I first saw this video made by a librarian who wanted to illustrate something we are all familiar with: our library resources and our library websites are frequently a tangled, confusing mess that sends users scurrying all over the place and offers a less than ideal user experience. http://www.youtube.com/watch?v=tKvR0OC4nYc [play video for audience]
For today’s presentation, I hope that by the end you’ll all be able to define what usability is, identify what kinds of things you could test in your own library, run a basic test, and identify the things in the test results that you need to act on first.
Before getting to usability, let’s tackle a broader subject, which is user experience. User experience is a way of thinking about the complete experience that a user has with a system or service. When talking about user experience, it’s important to keep in mind the mindstate of the user: what feelings and emotions does the interaction evoke? It’s also a matter of considering the characteristics of the system, such as how complex it is, how usable it is, how learnable it is, and so on. Finally, user experience tries to take into account the context of the interaction between the user and the system. Is the user on a mobile device? Is the user a student at the end of the semester looking for sources for an overdue paper?
Usability, according to web design expert Jakob Nielsen, usually includes five aspects:
Learnability. When users are confronted by a new site, how easily can they figure out on their own what the purpose of the site is and how to use it?
Efficiency. How quickly can the user perform tasks and navigate on the site?
Memorability. If the user returns, what parts and functions of the site are recognizable and don’t need to be relearned? Obviously, you don’t want your site to be hard even for your repeat visitors.
Errors. To what extent do features and functions not work as predicted by the user or as designed by the developer?
Satisfaction. How happy is the user when they are using your site?
I should note here that one of my greatest sources of inspiration about what usability is all about comes from this book by Steve Krug. Although it’s aimed at people doing web development in a corporate setting, the techniques are almost all spot on for the library world as well.
Usability tests are really not such a big deal. Here’s a quick overview of the steps:
Come up with a set of 3-5 different tasks that you’ll ask users to perform.
Round up some 5-10 volunteers who will act as test participants and then bring them one at a time into a testing area where you’ll observe them as they perform the predetermined tasks.
After you’ve observed all the test participants, you’ll have a pretty good idea of some things that need to be fixed and what things seem to be working OK. After you make the easiest 2-3 fixes, go back and do another round of testing and tweaking, and so on.
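The “observe, tally, fix the top few, repeat” cycle can be sketched in a few lines of code. This is only an illustration: the participant observations and issue labels below are made up, not data from any real test.

```python
from collections import Counter

# Hypothetical observations: which issue(s) each of five participants
# hit while working through the predetermined tasks (labels invented).
observations = [
    ["hours"],
    ["ill_form", "hours"],
    ["ill_form"],
    [],
    ["ill_form"],
]

# Tally how many participants hit each problem.
issue_counts = Counter(issue for session in observations for issue in session)

# Fix only the two or three most common problems, then run another round.
to_fix = [issue for issue, count in issue_counts.most_common(3)]
print(to_fix)  # ['ill_form', 'hours']
```

The point of the tally is prioritization: with 5-10 participants you will surface more problems than you can fix at once, so you act on the ones that tripped up the most people.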
But why go through all this trouble? Can’t I just see what’s wrong? Can’t I just ask the staff at my library to identify what needs to be fixed and take care of the problems that way?
Well, no, not if you really want to make your user the center of the website’s design. Lots of librarians will want to tell you that the library website is for them, too, and that they are experts on what users want and need. That is true up to a point, but as anyone who has sat on a library redesign team knows, everyone has lots of opinions and insights, many of which are completely at odds with each other.
The real users of your library’s website are your patrons, your customers, your students, your members, whatever you want to call those folks who don’t work in the library. You want to please those folks more than anyone else. By making an honest effort to let your users determine the look and feel of your website (within reason), you’re likely to have a design that will actually “work” for them. By going directly to your users and recording how they actually use the site, you’ll get information that will ground the endless debates over design matters. Usability testing offers a method based on social science research methods. And, as I see again and again when I run usability tests, the results are always surprising. Your test participants will find things that never occurred to your designers. Most usability experts will echo this experience, I believe.
So the first step is to think about what it is you want to test. Think not just of the library website itself but about all the services that are linked to your website or that are built into it. These things fall into three categories:
Things you control (in other words, they are very customizable by your library)
Things you kind of control (they are moderately customizable)
Things you barely or don’t control (you’ve got few options, you have to get someone’s permission to make any changes at all, or you’ve got no ability to customize)
So I’d like to ask a question of the attendees right now. In the chat window on your screen, please type in the kinds of things you might want to test. What are some categories of web-based tools/services/resources commonly found in libraries that should be tested?
So under the category of things you control, many of us would put the library website.
Library catalogs, A-Z journal lists, and LibGuides or other subject guide systems are things that we often have less control over but are worth testing. I’d also throw in link resolvers, too (like SFX).
Library databases, ebook platforms, and interfaces for digital reference services are often things we have very little or no ability to customize.
OK, so now that you have an idea about what service or resource you’re going to test, next you’ll want to think about what actual tasks you want your test participants to do. You’ll want to pick tasks that are going to reveal some useful information to you.
One obvious place to go looking for tasks are those pages or services that you and your colleagues already know need work, such as your interlibrary loan form or the way that library hours are displayed.
Another strategy is to think about what are the most common activities among patrons in your library. Take a look at your site statistics to see what are the most popular pages. Maybe that’s where you want to do your testing.
Or maybe you’re about to launch a new page or service. Those are great opportunities for testing.
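If your site statistics come from raw web server logs rather than an analytics dashboard, a short script can pull out the most-visited pages as candidates for testing. This is a hedged sketch: the sample log lines and paths below are invented, and the regular expression assumes a common-log-format Apache/Nginx log, so adapt it to whatever your server actually produces.

```python
import re
from collections import Counter

def top_pages(log_lines, n=5):
    """Count GET requests per path in common-log-format lines."""
    pattern = re.compile(r'"GET (\S+) HTTP')
    hits = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits.most_common(n)

# Invented sample lines; in practice, read them from your access log file.
sample = [
    '1.2.3.4 - - [26/Mar/2012] "GET /hours HTTP/1.1" 200 512',
    '5.6.7.8 - - [26/Mar/2012] "GET /databases HTTP/1.1" 200 417',
    '9.8.7.6 - - [26/Mar/2012] "GET /hours HTTP/1.1" 200 512',
]
print(top_pages(sample, 2))  # [('/hours', 2), ('/databases', 1)]
```

Whatever lands at the top of that list is a reasonable place to aim your first round of tasks, since problems there affect the most visitors.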
OK. So the gear you need is not too complicated. You’ll need a computer; a desktop or a laptop will do. Last year, I had test participants use my smartphone when I was testing a mobile website. If you really want to get serious about user-centered design, you may want to do usability testing on paper sketches that precede any actual website coding. This is perfectly acceptable and commonly done. It’s a great way to run tests that will help you catch basic page layout and site architecture problems.
You’ll also want to install some screen recording software on the computer that your test participants use. That way, you can capture as a movie all the mouse movements, page clicks, and characters typed; this is really rich data to return to when the tests are done and you are trying to write up your report. I’ll talk in a minute about software options.
Another option that has worked for me is to simply have a second person on hand helping you with the test. That person’s sole responsibility is to closely observe the test participant and take detailed notes.
Finally, if you have screen recording software, you might as well get a USB microphone that can capture the conversation between the test participant and the test facilitator. You’ll want to encourage the participant to think aloud as much as possible as they perform tasks.
Here are some sample tasks that I’ve used for various tests. You can see they are not huge, multistep projects but fairly straightforward tasks.
Here are five options for screen recording software. I’ve used CamStudio a lot, mostly because it is free and can be installed on any machine. With the others, you’ll get a much richer feature set but will be limited in the number of machines you can install it on.
OK, so if you are doing the tests all by your lonesome (not the best situation but certainly still doable), you’ll be in charge of recruiting test participants, running the test, recording the test (you’ll definitely need screen recording software and a mic), and prepping the test environment.
I am so thrilled that just hours ago a librarian in Canada, Amanda Etches, who is known for her work in usability, posted this wonderful picture captioned, “Guerrilla testing.” It shows what appears to be a staff of one person running usability tests from a library cart in a public area of the library.
If you can get another person to help you out with the testing, you can break up the tasks in rational ways.
Before you run off and do the tests, I recommend drafting a one-page document that details the protocol for your test. The most important part of the protocol is a script that you’ll type out and read from during the tests. It is essential that when you are delivering the task details to your test participants that you say it in the exact same way to each person and that you never give away details about how to do the task. The whole point of the test is to see how much the participant can do on their own without any expert help; this mimics the real world use of your library site. There’s rarely a librarian looking over your patron’s shoulder as he or she navigates your site.
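One easy way to guarantee identical wording across sessions is to keep the script and task text as fixed strings that the facilitator reads verbatim. This sketch is purely illustrative; the intro wording, task text, and function names are my own inventions, not part of any particular protocol.

```python
# Fixed wording lives in one place; the facilitator reads, never ad-libs.
INTRO = (
    "I'll be reading from a script so that every session runs exactly "
    "the same way. Please think aloud as you work; there are no wrong answers."
)

TASKS = [
    "Starting from the homepage, find out what time the library closes today.",
    "Find a scholarly article about social media and privacy.",
]

def session_script(participant_id):
    """Return the full script; only the session label varies per participant."""
    lines = [f"Session {participant_id}", INTRO]
    for number, task in enumerate(TASKS, start=1):
        lines.append(f"Task {number}: {task}")
    return "\n".join(lines)

print(session_script("P1"))
```

Because every participant’s script is generated from the same constants, no one accidentally gets an extra hint or a reworded task, which keeps the sessions comparable.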
Getting properly set up is key. Make sure your mic works, the screen recording software works, and that you’ve cleared the cache in your browser so that no words entered previously in search boxes will appear as your test participants use those search boxes.
Your script should include an explanation of why you are reading from a script; you don’t want to freak out the participant. You can break the ice before the task performance portion by asking them to talk about websites that they like to use. Make sure you don’t give them a chance to explore your site before doing the tasks. You want to drop them, as much as possible, into real-world scenarios where they have come to the site with a specific task in mind.
It’s essential that you ask the participant to speak aloud so you can hear them express any frustrations or surprises they’ve had.
If you can engineer a reward or payment to your testers, that’s great but not essential.
When the test is done, quickly get yourself to a computer so you can get down any insights you gained. After 2-3 participants, it’s easy to lose track of who did what and what you learned.
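Even a tiny timestamped log keeps sessions from blurring together. The sketch below is one possible way to do that; the function name, file name, and note text are all hypothetical.

```python
from datetime import datetime

def log_insight(participant, note, path="usability_notes.txt"):
    """Append one timestamped observation right after a session ends."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{stamp}\t{participant}\t{note}\n")

log_insight("P1", "Could not find the interlibrary loan link from the homepage.")
```

A flat tab-separated file like this is trivial to skim or sort later when you are writing up the report and trying to spot problems that recurred across participants.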
Usability Testing Basics
Usability Testing Basics
Stephen Francoeur
User Experience Librarian
Baruch College (New York, NY)
About Me
User experience librarian @ Baruch College
Blogger
• Beating the Bounds
• Digital Reference
• Stephen Francoeur’s Commonplace Book
On (too) many social networks
ecahoy. “Finding Time in the Penn State Libraries.” YouTube. 2007. Web. 26 Mar. 2012.
Learning Goals
• Define usability
• Identify what to test
• Run a basic test
• Act on test results
User Experience
• Mindstate of the user
• System characteristics
• Context of the interaction
Adapted from Marc Hassenzahl and Noam Tractinsky, “User Experience - a Research Agenda.” Behaviour & Information Technology 25.2 (2006): 91–97.
Usability
• Learnability
• Efficiency
• Memorability
• Errors
• Satisfaction
Nielsen, Jakob. “Usability 101: Introduction to Usability.” useit.com. Web. 26 Mar. 2012.
Staff of One
• Recruits test participants
• Preps test environment before & after each test
• Runs the test
• Records the test (screen recording software & mic)
“Guerilla testing!”
Photo and caption credit: Amanda Etches. http://instagr.am/p/IuHhRri-wS/
Staff of Two
#1
• Recruits test participants
• Preps test environment before & after each test
• Runs the test
#2
• Observes the test
Draft a Test Protocol
• Goals
• Recruitment
• Tasks
• Script
• Staff Roles
• Equipment
Before Each Test Participant Arrives
• Clear cache in browser
• Check screen recording software
• Check mic
Getting Each Test Participant Started
• Explain why you’ll be reading from a script
• Chat about websites they like
• Don’t show website until first task is introduced
Working Through Tasks
• Stick to the script
• Don’t help the participant
• Encourage participant to speak aloud
As Each Participant Finishes
• End and save the screen recording
• Give participant any payment or reward
• Thank participant profusely
After Test Is Done
Make Notes
• ASAP (same day)
• Identify common problems
Write Report
• Keep it simple
• Focus on fixing just 2-3 things
Test Again
• Might not be same tasks
• Lather, rinse, repeat
Focus on Fixing Easy Things
• Small changes
• Iterative development and testing
• A website that keeps up with users
For More Info
Francoeur, Stephen. “Usability Testing Our New Website.” Beating the Bounds. 16 Jan. 2012. Web. 27 Mar. 2012. link
Hassenzahl, Marc, and Noam Tractinsky. “User Experience - a Research Agenda.” Behaviour & Information Technology 25.2 (2006): 91–97. link
Krug, Steve. Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems. Berkeley, Calif.: New Riders, 2010. Print.
Nielsen, Jakob. “Usability 101: Introduction to Usability.” useit.com. Web. 26 Mar. 2012. link
Reidsma, Matthew. “How We Do Usability Testing.” Matthew Reidsma. 15 Nov. 2011. Web. 27 Mar. 2012. link
Reidsma, Matthew. “Why We Do Usability Testing.” Matthew Reidsma. 25 Oct. 2011. Web. 27 Mar. 2012. link
University of Texas at Austin. “Usability Testing.” Web Publishing. University of Texas at Austin. 27 Mar. 2012. link