● The degree to which a [website] can be used by specified consumers to achieve
quantified objectives with effectiveness, efficiency, and satisfaction in a quantified
context of use.
● How easy is your website to use?
How to reach me:
I'm going to talk about two usability testing processes in particular today: user testing and card sorting. One may sound a little more interesting than the other. “Card sorting” sounds like something librarians did in the 70s...
...or maybe something dealers in Vegas do when they’re just starting out on the job, but they’re actually both pretty neat. This will be an interactive session: I hope to get everyone involved in the card-sorting exercise, and I’m going to ask for one or possibly two volunteers for the user testing. We’ll see how we do with time. I’m going to use two pieces of software today to demonstrate these processes: Optimal Workshop and TechSmith Morae. The good news for those of you in the non-profit sector (how many of you are there?) is that neither of them will break the bank.
First question: hands up. Who here has participated in a website user-testing exercise? (Get details.) Who has participated in a card-sorting exercise? (Get details.) Has anyone actually led a user-testing or card-sorting exercise? (Get details.)
Let’s start with a little background. Wikipedia’s definition of usability: “The degree to which a [website] can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.” Effective definition of usability: “How easy is your website to use?”
Has anyone encountered any usability problems on their site? Have your users ever told you your site was difficult to use? Interestingly, when I look at usability problems, I find a lot of the same issues recur from organization to organization, especially in the non-profit sector. Any ideas what these might be? Language on the site: too complex, or there’s too much of it. Your priorities aren’t the same as your users’ (i.e. what you want is not what your users want). Trying to be all things to all people. Your site being designed too much, or even exclusively, for a laptop audience vs. a mobile audience. Let’s get into our first exercise: card sorting. And I’ll start with a piece of advice: have a script ready to go.
(Not those kinds of scripts.) One that tells your users exactly what to expect: Why you’re doing this. What they’re going to do. How long it will take. Whether they’re going to be recorded. Any other relevant expectations.
Card sorting allows you to understand how your users think about your site, and - based on that - what the architecture should look like in order to emphasize their needs. From that, you’ll also get a good indication of what your navigation should look like. Basically, you turn the relevant pages of your website into cards, and ask the testers to arrange them into the categories that make the most sense to them. You can put the title of each web page on the card, or - if what the page is about isn’t obvious from its title - you can describe what the page does on its card. The two approaches will give different results: you’re prejudicing the results a little bit if you describe what the page does vs. just listing the name. One thing that’s important to keep in mind: the page name may require context to understand, i.e. you may need to see it in the context of the navigation in order for it to make sense. In spite of this, the page name does need to stand on its own - without the context of navigation - thanks to Google search: there’s no guarantee people will navigate to your page through your interface. They may drop right in, and so not have access to that context. Also, if you decide to describe the page’s content or function on the card - as opposed to just using the page title - you won’t get to see what users think of your existing page names. You’ll lose that feedback. That’s why I like to put the page titles as they exist on the cards, then go back and see if people understand what each card is about, and allow them to change their categories if their assumptions weren’t correct.
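One common way to make sense of open card-sort results is a co-occurrence (similarity) count: how often each pair of cards ended up in the same group. Here is a minimal sketch of that idea; the participant sorts and page names below are made-up examples, not data from any real study:

```python
# Sketch: counting how often pairs of cards are grouped together
# across participants in an open card sort. All data is illustrative.
from collections import Counter
from itertools import combinations

# Each participant's sort: {their category name: [cards they put in it]}
sorts = [
    {"About": ["Our Story", "Staff"], "Get Involved": ["Donate", "Volunteer"]},
    {"Who We Are": ["Our Story", "Staff", "Volunteer"], "Support": ["Donate"]},
]

pairs = Counter()
for sort in sorts:
    for cards in sort.values():
        # Count every pair of cards that shares a category.
        for a, b in combinations(sorted(cards), 2):
            pairs[(a, b)] += 1

# Pairs grouped together most often suggest pages that belong in one section.
for (a, b), count in pairs.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```

Tools like Optimal Workshop compute this kind of matrix for you, but seeing the raw counts makes it clear what the similarity scores actually mean.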
How many participants for usability testing - whether that’s card sorting or user testing? Has anyone heard of Jakob Nielsen or the Nielsen Norman Group? He’s been the acknowledged thought leader on usability dating back to the 90s. He literally wrote the book on usability - it’s the one I used when studying web design around the turn of the century, and I’ll mention it and a few others at the end of the presentation. Nielsen famously said you only need to test a design with five users to uncover about 80% of your usability issues. However, usability is mainly an art, not a science. It’s mainly qualitative, not quantitative, and it requires judgment and interpretation. Because of that, it’s often less valuable to achieve statistical significance than it is to listen and use good judgment.
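Nielsen’s five-user claim comes from a published model: the proportion of problems found by n users is 1 - (1 - L)^n, where L is the share of problems a single user uncovers (Nielsen and Landauer reported roughly 31% on average). A quick sketch shows why five users gets you into the 80% range - note the total of 100 problems below is just an illustrative assumption:

```python
# Sketch of Nielsen & Landauer's model of usability problems found.
# L ~= 0.31 is their reported average; total_problems=100 is illustrative.

def problems_found(n_users, total_problems=100, L=0.31):
    """Expected number of problems uncovered by n_users testers."""
    return total_problems * (1 - (1 - L) ** n_users)

for n in (1, 3, 5, 15):
    print(f"{n:>2} users -> ~{problems_found(n):.0f}% of problems found")
```

With L = 0.31, five users find roughly 84% of the problems, and the curve flattens quickly after that - which is the argument for several small rounds of testing rather than one big one.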
The next question is, how many cards do you need? I’ve read anywhere between 20 and 50. 50 is a lot. The question you need to ask yourself: do you want to test the whole site or just one section? Most websites have more than 50 pages, so it’s likely not possible to cover your whole site with only 50 cards. You may want to test only the most important pages, the top-level navigation, one section at a time, or something like that.
How long do you give them? It depends on how many cards you have. I’ve heard anywhere between 20 and 90 minutes; optimally, probably 30 to 45. You’ll want more than 30 minutes if you have a large set of cards, or if - like me - you have other questions to ask your testers.
One last piece of advice: whether in user testing, card sorting or any other kind of usability exercise, get people to speak out loud. It’s critical to understand what they’re thinking and the frustrations they’re feeling. As a tester, it’s also kind of strange just sitting and staring at someone for 45 minutes to an hour, so you may as well have something to talk about.
And now for part II: user testing. User testing is simply a way to give your subjects a series of tasks and see how easy or difficult those tasks are to complete. By extension, you get to see how easy or difficult your website is to use. You get to see what kind of roadblocks they run into, their frustrations, etc. You’ll want to see how they complete a task, how their assumptions differ from yours, and so on. We often become blind to our own websites, and can’t see the forest for the trees. That’s why it’s important to user test your site: it’s the ultimate reality check. Structuring a user-testing exercise: the same considerations about number of users and time discussed for card sorting apply.
Coming up with goals: aim for 6 to 12. I recommend having more than you think you’ll need, because some people are speed demons and will burn through the tasks way faster than you expect. Goals need to be just like the ones you set for yourself in your annual review (anyone here have to do an annual review?): SMART. Specific, Measurable, Achievable, Relevant, Time-bound. So, for example, “Find out if our site is usable” isn’t a very good goal for a user-testing exercise. Similarly, “find out if our navigation is easy to use” isn’t either. Because how do you go about achieving a goal that general, that non-specific? How do you even know you’ve achieved success with the goal, and how do you measure that success? How are these goals even relevant? And can anyone achieve goals that large and vague in a timely fashion? Here’s where I’m going to throw it open: I want you to think about your own website. Can you think of a goal you’d like to test on your website? Something specific you think may not be usable, and which you’d like to find out whether it is or isn’t? Finally, you’ll want a combination of laptop and mobile tasks. The exact mix of each is up to you, but I’d suggest it reflect your Google Analytics. Say ⅓ of your traffic comes from mobile; if that’s the case, you’d want to run ⅓ of the tasks in your user testing on a mobile device. If you have discrete, separate audiences, you may want to target each of them with a separate round of testing. Not all of the tasks you might want to test are relevant to everyone.
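One way to keep tasks SMART is to write each one down with an explicit success criterion, time limit, and device before the session starts. This is a hypothetical sketch - the task wording, field names, and the structure itself are my own illustration, not a standard format:

```python
# Hypothetical sketch: encoding user-testing tasks so each is specific,
# measurable, and time-bound. All task wording is illustrative.
tasks = [
    {
        "prompt": "Find the date of our next fundraising event.",
        "success": "Participant reaches the Events page and reads the date aloud.",
        "time_limit_min": 3,
        "device": "mobile",
    },
    {
        "prompt": "Sign up for the monthly newsletter.",
        "success": "Confirmation message is shown on screen.",
        "time_limit_min": 5,
        "device": "laptop",
    },
]

# Check the device mix against your analytics (e.g. aim for ~1/3 mobile
# if that's your traffic share).
mobile_share = sum(t["device"] == "mobile" for t in tasks) / len(tasks)
print(f"{mobile_share:.0%} of tasks run on mobile")
```

Writing the success criterion in advance forces the goal to be measurable: if you can’t say what “done” looks like, the task isn’t specific enough yet.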
One thing about leading these kinds of tests is that you sort of turn into an amateur psychologist or a journalist. The subjects will ask you all kinds of questions, and you have to turn them around and say, “I don’t know - what would you do in this situation?” or “This test isn’t about me, it’s about you,” or similar evasive responses. You don’t want to prejudice their answers. You don’t want to lead them to your conclusions. So you should try to keep the subjects talking (as much as is reasonable - some people just aren’t talkers) but don’t bias the outcome. At the same time, you need to make sure they understand there is no right or wrong answer in user testing. When they’re experiencing frustration, it’s not their fault: it means you’re finding out something valuable about your site, and that’s a good thing.
Don’t get overwhelmed. Start small and keep going.
You analyze, and come up with recommendations. Now, this does look analytical, and - despite what I said earlier - I wouldn’t recommend shying away from empirical analysis completely. It has its place. If nothing else, it will demonstrate the rigour and professionalism you brought to the exercise, which will help establish your credibility and authority. It will show the powers-that-be that you’re not just leading them to the conclusion you want: you’re basing it on real information you collected from users.
Once you’ve analyzed the data, you want to figure out how to start fixing things. One way you can do this is with an inventory, or a backlog, of tasks. This lists the potential fixes you’ve identified, and prioritizes them by how much of an impact they’ll have and how much effort they’ll take. Steve Krug recommends that - while you do the least you can to fix each problem - you don’t shy away from tackling the big fixes. Often people will prioritize the smaller fixes because they’re easier, but that leads you to never really tackle what’s wrong with your site. So I’d recommend starting with the most impactful tasks that require the least effort.
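“Most impact for least effort” can be made concrete by scoring each fix and sorting by the ratio. This is a hypothetical sketch - the fixes, the 1–10 scores, and the ratio heuristic are all illustrative assumptions, not a standard method:

```python
# Hypothetical sketch: prioritizing a usability backlog by impact vs. effort.
# Fixes and scores (1-10 scales) are made up for illustration.
backlog = [
    {"fix": "Rewrite homepage copy in plain language", "impact": 8, "effort": 3},
    {"fix": "Rebuild top-level navigation",            "impact": 9, "effort": 8},
    {"fix": "Enlarge mobile tap targets",              "impact": 6, "effort": 2},
    {"fix": "Rename ambiguous page titles",            "impact": 7, "effort": 2},
]

# Highest impact-per-unit-effort first.
backlog.sort(key=lambda t: t["impact"] / t["effort"], reverse=True)

for task in backlog:
    print(f'{task["fix"]} (impact {task["impact"]}, effort {task["effort"]})')
```

Note that a plain ratio will naturally push big fixes like the navigation rebuild to the bottom - which is exactly Krug’s warning, so treat the ranking as a starting point for discussion, not a verdict.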
And that’s basically it. You keep picking away at that backlog of tasks. Until you do your next round of user testing. Because testing is never, and should never, really be done. Each time you do a new round of usability testing, you learn more about your audience. Each test or card sort or what-have-you is a learning opportunity. And with that, I’ll mention a few more tools here, simply because they’re relevant to usability, even if they don’t exactly fall under the umbrella of user testing.
The next few slides are all from SiteImprove. They produce a variety of user tools, such as the scroll maps you see here. They’re a way to show where a user will scroll to on your page: they use a colour overlay, with the area most scrolled to (i.e. the default that shows up on page load) in yellow, transitioning through orange, red and purple to blue for the least scrolled-to areas. This tells a pretty compelling story about how users will behave on a web page.
These are heat maps that show where users’ eyes go. Now you might wonder how you can track eyeballs. I’ll give you a hint: they don’t use a webcam. They use your mouse, because where your mouse goes tracks to where your eye goes about 86% of the time. So your mouse is a proxy for your eyes. This is useful because it shows what people are paying attention to, which is often not what the powers-that-be think people are or ought to be paying attention to. Again, a reality check.
Here’s the last map: click maps. Probably one of the most important maps, because - as the saying goes - people vote with their clicks. You can get similar information from Google Analytics, but the presentation layer of a click map tells a pretty compelling story. And that’s what you want to do with all of the user data you collect: tell a story. Hopefully, a compelling one. All these maps are useful for showing user behaviour, which is really all usability testing is about. So they’re immensely valuable in and of themselves.
And here’s the last - and possibly one of the most overlooked - parts of usability: can people read and understand your site? This will come out in user testing, but it never hurts to be able to document it across your site, which is why I love this report from SiteImprove.
And that’s it. Here’s some suggested reading if you want to dive further into usability. In addition to the books, I highly recommend you check out the Nielsen Norman Group. That’s the same Nielsen as in the third book, “Prioritizing Web Usability.”