KATE TINWORTH
EXPOSE YOUR MUSEUM LLC
@EXPOSYOURMUSEUM
EMAIL ME THE PICTURES:
kate@exposeyourmuseum.com
Baltimore Pratt Library: UCD and Usability Testing Training
Workshop given at the Baltimore Pratt Free Library, Spring 2013

  • Bio. How I met Xiaoyu (Webwise). Why I’m here. Why you’re here. [GROUP] EACH PERSON: Tell me your job at the library, your role with the website redesign (if that’s different), and let me know a website you think is really awesome and why. GROUP: On a scale of 1-5, how much do you know about this topic? What are you hoping to get out of today? GROUND RULES: You get out what you put in; should be really fun. We’ll take breaks, but if you need to anytime, go ahead. Take notes, tweet, ask questions, etc.
  • [GROUP: How would you define it?] How well users can learn and use a product to achieve their goals. It also refers to how satisfied users are with that process. * Intuitive design: a nearly effortless understanding of the architecture and navigation of the site * Ease of learning: how fast a user who has never seen the user interface before can accomplish basic tasks * Efficiency of use: how fast an experienced user can accomplish tasks * Memorability: after visiting the site, whether a user can remember enough to use it effectively in future visits * Error frequency and severity: how often users make errors while using the system, how serious the errors are, and how users recover from them * Subjective satisfaction: whether the user likes using the system
  • UCD: a design methodology and process that focuses on the: * Needs of end users * Limitations of end users * Preferences of end users * Business objectives
  • No matter what objectives you have, you must carefully balance the needs of users and the needs of the organization. Users visit your website to find information or accomplish tasks. If they don't find your website helpful, you risk them leaving. By focusing on the end user you: * Satisfy the user with a more efficient and user-friendly experience * Increase loyalty and return visits * Establish a more relevant and valuable website * Create a website that supports rather than frustrates the user. Everyone's a winner!
  • I am going to talk a LOT today, so I thought it would be good to break that up with some videos.https://www.youtube.com/watch?v=dln9xDsmCoY&list=PLf1BzVu9cUoTa-w-3eIPwEon9TiQ9wXX7
  • We are going to focus on two, since you're well into your planning stages and since you are experts in writing content.
  • Before you design and write your website, you should find out how well your current site is working, set measurable usability goals, and learn as much as you can about your users and their tasks.And the cool thing is, this is in your plan! Well done! (Not true for everyone.)Does your site meet your organization’s objectives and usability goals? (No; hence the redesign)Are the goals and objectives still relevant? (Just revisited these)Review or research the types of users who visit your site and the reasons they come.Gathering information from users is essential to understanding: * How well they can find information on your site * How efficiently they can use the functionality on the site * How well they understand your content * How much they enjoy using your siteHelps us not make the same mistakes twice, as well as just keep up with our ever-changing, ever-evolving users…
  • Identify issues your site’s users have by reviewing: * Emails that come from site visitors * Questions from users that call or stop by the desk * Comments or questions left by users on your siteThis is a bit like the exercise you have been doing in your kick-off meetings, right?
  • Look at your Web logs to see how users are using the site. [GROUP] Can any of you tell me: * How many users go beyond the home page? * Which pages of your site are the most popular? * What items are users searching for on the site? * What words are people using as they search? Google Analytics is one way to do this. We will revisit this afternoon. Google Trends is another interesting way. So is Tagxedo. We can maybe look at all 3 this afternoon.
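The questions above don't have to wait for a full analytics setup; even a raw export of page paths and on-site search terms can answer most of them. A minimal sketch, not part of the workshop materials — the row format here is invented for illustration:

```python
# Hypothetical sketch: tallying "most popular pages" and "what words
# people search" from exported (page_path, search_term) rows. The row
# format is an assumption, not any real analytics export.
from collections import Counter

def summarize(rows):
    """Count page views and on-site search terms."""
    pages, searches = Counter(), Counter()
    for page, term in rows:
        pages[page] += 1
        if term:                          # blank term = no search on that view
            searches[term.lower()] += 1   # fold case so "Large" == "large"
    return pages, searches

rows = [
    ("/", ""), ("/hours", ""), ("/hours", ""),
    ("/catalog", "large print"), ("/catalog", "Large Print"),
]
pages, searches = summarize(rows)
top_pages = pages.most_common(2)      # most popular pages
top_search = searches.most_common(1)  # most common search term
```

The same tallies answer "how many users go beyond the home page": compare `pages["/"]` against the total number of views.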
  • Use a survey to ask users how they use the website, how satisfied they are with different components, etc.These types of questions help you learn how well people can use your site.You can conduct an online survey at any stage of the development process: * Before a site redesign you learn about current users and what they are trying to accomplish * After launching a new or revised site, you can learn if your new design meets the needs of users and identify areas for improvement * Ongoing surveys can allow users to rate or rank the features on your site or provide ideas for future improvements
  • [GROUP- ?s ON SLIDE]Tell me the bad stuff; I can take it!A survey is a structured, standardized questionnaire that your target audience completes. For the web, users typically use things like SurveyMonkey or other web survey software to enter data. The data is stored in a database and survey tool generally provides some level of analysis of the data.Surveys typically have data that can be expressed numerically or through short answers.[GROUP: Is a survey always appropriate?When is/isn’t a survey appropriate?]
  • *Consistent questions*Can reach large numbers*Provides for anonymity (different than confidential-- stronger!)*Relatively inexpensive*Easy (easiER) to analyze
  • *Might not get careful feedback*Wording can bias client’s response*Response rate is often low*Literacy demands
  • [GO TO NEXT]
  • Multiple (maybe even a dozen!) drafts
  • Important to have more than one set of eyes on it; test it out
  • *Is the information already available?*Wording of questions and responses*When and where will it be distributed? How will you get the word out? What sorts of respondents are you after?*How will the returns be managed? Who will look at the data? Who’s in charge?!
  • Self-congratulatory or “pat yourself on the back” surveys are not a good use of your time or your participants!(Jobs v Gates)
  • It’s tempting to come up with a looooooong laundry list of questions.Resist temptation!
  • Think about how THEY will read it.
  • [GROUP: GO THROUGH TOGETHER]
  • We’re going to work on the first one…. [ACTIVITY: 30 mins] Round-the-world activity to identify questions you might want to ask web audiences on a survey about YOUR CURRENT WEBSITE OR NEW ONE. Take your new cheat sheet with you! DO NOT WORRY ABOUT WORDING THEM WELL; JUST WRITE WHAT YOU WANT TO KNOW IF YOU SURVEYED SITE USERS TODAY. Round 1: 5 papers/5 groups (Outcomes/Impact; Satisfaction/Reactions; Participation/Reach; Activities; Planning/Needs). EVERYONE NEEDS A MARKER. START AT ONE PAPER. SPREAD YOURSELVES OUT. ALL CAN WRITE OR YOU CAN DESIGNATE SOMEONE. ROTATE WHEN I SAY TO. Round 2: then voting (3 dots each).
  • Who should complete it? Who represents your users? Where might you find them?Less variability in a bigger sample. Probability sample generalizes to population. It’s ok to use non-probability and to focus on RIGHT users. Do you need to pay or otherwise incentivize it?What happens if you don’t get enough?[ACTIVITY: 20 min]Who are the major “groups” of website users? Let’s pick 3.Get in 3 groups.I will assign a user group to each table.Come up with ideas about how you will find these potential survey-takers. What might make them more likely/willing to take your survey?What might you want to know about them? (Demographics or other characteristics.)Share out.
  • Questions need to be carefully worded.Think about culture and cultural nuances. Think about ages, languages, and literacy levels. May need more than 1!Think about how you want to order your questions.Think about the format– making sure the survey is super clear and has an attractive design (easy to read).Again, no Lone Rangers! Work as a team. Review it. More eyes will help you catch things that might not be user-friendly!
  • [GROUP]Who can explain the difference?Why might you choose one over the other?
  • [NEXT SLIDE]
  • It’s their words!You can get unanticipated things you may never have thought of.You will get diverse, wide-ranging answers.
  • They are harder to answerThey might be hard to analyze or make sense of when you get them backIt can be hard for people who don’t like to or aren’t as good at writing a lot. You may get short replies.
  • [NEXT SLIDE]
  • You see the answers right in front of you. Just like multiple choice tests at school, right?Not hard to analyze and categorize, since you did that work up-front.
  • The answer you want may not be there. It may bias people to the answers you want to see, instead of the real ones. You won’t get anything unintended or unexpected.
  • [ACTIVITY: 20 min] Get in pairs. Distribute index cards and markers (8-10 cards per pair– MAKE SURE 40 white cards and 20 colored cards are LEFT for later!). Please, in your pair, rephrase the question in as many ways as possible and write each question on a separate card. Do some open, some closed. …After a few minutes, ask ALL pairs to place their cards on ONE table. Gather around. Work together to cluster “like” (similar) questions into groupings (piles). Don’t force cards into a pile. Look at the groupings as a whole group and discuss: 1. What is different about the different questions? 2. Even within clusters, what different wording do you see? Point out the many ways a person can pose a question. 3. What is different about the information that the different questions might yield? 4. Why might you choose one way of wording a question over another? 5. Wrap up with a discussion of the difference between the quantitative and qualitative data that different questions provide. What does each mean for analysis? [GO BACK TO SEATS] When are you likely to use an open-ended question? A close-ended question?
  • A crowd favorite! [ACTIVITY: 20-30 min] Form DIFFERENT PAIRS. Distribute the handout What’s wrong with these questions? Assign 3-4 questions to each pair. Ask each pair to identify problems with their questions. Have them re-write the questions to eliminate the problems and be prepared to share with the full group. As a full group, discuss and review each question and revised wording. Add content as needed. Point to back side of handout: Checklist: Avoiding common problems in question wording. What did you learn today about writing questions? What is one thing you will do the next time you write a questionnaire?
  • [BRAINSTORM] 1. What makes a questionnaire attractive – something you want to complete? 2. What makes a questionnaire readable? What makes it easy to follow and complete? 3. What respondent characteristics do you need to consider? 4. Do these answers differ depending upon who is completing the questionnaire, i.e., teens, seniors, people of different ethnic origins?
  • …Separate different components of a questionnaire with different type styles, sizes, lines, etc.Don’t just use “default.”Use arrows to show respondents where to go. Look at lots of bad ones!
  • Shorter usually generates a higher return.Don’t over-burden the respondent. You can always do TWO.Other formatting can make length seem more “reasonable.”
  • Include: Sponsor, purpose, use, confidentiality, etc.Include: Instructions for how to answer the questions (E.g., check all that apply).Arrange questions so they flow naturally.Demographics can be a bit off-putting… best left to end. Be consistent with numbers, format, scales, etc.
  • Start with something easy, as you would in conversation. Don’t jump right into sensitive or controversial things!Address important things early.Move from specific questions to general. Move from closed to open.
  • Only include what you’ll use. How will you use gender? Age? Income?May want to preface with reason you need it (E.g., funders).May want to keep the information optional.[GROUP: Are demographics important to you all for the new site? Which ones? Why?]
  • [NEXT SLIDE]
  • [HANDOUT]GO THROUGHA quality survey is almost never written in one sitting, and often not by one person.A list of questions (like we made today) is just the starting point.Many factors affect response![BRAINSTORM: 5 min] Let’s pilot some of the questions you came up with earlier – the ones on the wall with the most dots!Using the checklist and what we’ve gone through this morning, how might we fix…. Which one(s) might you want to lead with? End with? What kind of opening/explanation would you give about the survey?
  • Doesn’t have to be a “letter.”But it should contain:Why you’re botheringHow you’re going to use info providedImportance of their response/inputHow and when to respondWhether it will be anon or confidentialYour appreciationPromise results, if you can (like a new site!)Signature/contact if needed Pre-test this too!
  • At one time? For how long? Do you want to ask these people if they want to sign on for MORE eval (like usability testing)?Do they need incentives? Money? Raffle? No late fees? What else could you offer?
  • You can get a lot from SurveyMonkey, but not everything.May need help (evaluator; stats student/intern).Does a survey about a website have to be online? Not at all!A paper survey might also work well for your audience. (Remember, then there’s data entry and no analysis….)Think about what you want in your report before you send out the survey. Will you get what you’re after?Reporting and clear eval communication is absolutely the key to use. Make it look nice. Think about your audience.Be truthful.
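As a sketch of the kind of hand analysis you might layer on top of a survey-tool export — the rating values below are invented, and a real SurveyMonkey export would be a CSV you'd read in first:

```python
# Hedged sketch: summarizing one 1-5 satisfaction question from an
# exported survey. The response values are made up for illustration.
from statistics import mean

responses = [5, 4, 4, 3, 5, 2]  # e.g. "How satisfied are you with the site?"

avg = mean(responses)
# "Top-box" share: the fraction of respondents who chose 4 or 5.
top_box = sum(1 for r in responses if r >= 4) / len(responses)
```

Reporting both a mean and a top-box share keeps the summary honest when a few very low ratings drag the average down.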
  • [BRAINSTORM: What is UT? How many of you have done it before?] Usability testing fits in as one part of the user-centered design process. Representative users try to find information (or use functionality) on the Web site, while observers, including the development staff, watch, listen, and take notes. The purpose of a usability test is to identify areas where users struggle with the site and make recommendations for improvement. You can use usability testing to show that the benefits of usability engineering outweigh the costs. The types of problems you might find that cost time (and therefore money) are misleading navigational cues, poorly designed pathways, pages that are so dense they take a long time to use, etc. This method was first published by Clare-Marie Karat of IBM, who used it to show a 100-fold return on investment for a particular software product. In that case, spending $60,000 on usability engineering throughout development resulted in savings of $6,000,000 in the first year alone. Tangible benefits: usable systems can save money by helping to * increase productivity and customer satisfaction * increase sales and revenues * reduce development time, costs, and maintenance costs * decrease training and support costs.
  • Establish baseline user performance and user-satisfaction levels of the user interface for future usability evaluations.You can use it as you continue and build new site.You should test early and test often. Usability testing lets the design and development teams identify problems before they get coded (i.e., "set in concrete”). The earlier those problems are found and fixed, the less expensive the fixes are.
  • To determine design inconsistencies and usability problem areas within the user interface and content areas. Potential sources of error may include: o   Navigation errors – failure to locate functions, excessive keystrokes to complete a function, failure to follow recommended screen flow. o   Presentation errors – failure to locate and properly act upon desired information in screens, selection errors due to labeling ambiguities. o   Control usage problems – improper toolbar or entry field usage. Exercise the web site under controlled test conditions with representative users. Data will be used to assess whether usability goals regarding an effective, efficient, and well-received user interface have been achieved.
  • [GROUP: Would this group be different than the survey group? How? Who would be your “ideal group” of participants?] Can do this in person, by phone, by online survey… General Questions: Are you male or female? [Recruit a mix of participants] Have you participated in a [focus group or usability test] in the past six months? Professional Demographics: What is your current position and title? How long have you held this position? Which of the following best describes your work environment? [e.g., commercial business, nonprofit, government agency, self-employed, etc.] Computer Expertise: Do you use a computer? [If no, terminate] Besides reading email, what are typical activities you do on the computer? [e.g., gaming/entertainment; reading the news; shopping/banking; graphic design/digital pictures; programming/word processing, etc.] About how many hours per week do you spend on the computer? [Recruit a range of use, e.g., 0 to 10, 11 to 25, 26+ hours per week] Domain Knowledge: Time at library, etc. Contact Information: [If the person matches your qualifications, ask] May I have your contact information?
  • This is a time consuming, but amazing, process. The first thing is to block out your time, and know that typically you will need no less than 3-4 staff for each test.You have the tech already– your existing site.Training is partially happening today, but should be ongoing. The more you practice the better you get. It’s more art than science. Having a really clear protocol is essential. Planning your work and working your plan with precision matters in UT.I will give you a handout later with some awesome resources to help with these.
  • This is general; should be customized for your needs. The participant’s interaction with the Web site will be monitored by the facilitator, seated in the same space. Note takers and data logger(s) will monitor the sessions– either in the same room or an observation room, connected by video camera feed. The test sessions will be videotaped. There is an argument about whether you want to see their eyes or their screen. If you have screen capture software you might do both. Could have observers see the face and video get the screen. Emphasize that the participant is evaluating the application, rather than the facilitator evaluating the participant. Participants will sign an informed consent that acknowledges: participation is voluntary; participation can cease at any time; the session will be videotaped but their privacy of identification will be safeguarded. The facilitator will ask the participant if they have any questions. At the start of each task, the participant will read aloud the task description from the printed copy and begin the task. Time-on-task measurement begins when the participant starts the task. The facilitator will instruct the participant to ‘think aloud’ so that a verbal record exists of their interaction with the Web site/Web application. The facilitator and observers will observe and enter user behavior, user comments, and system actions. After each task, the participant will complete post-task questions and elaborate on the task session with the facilitator. After all task scenarios are attempted, the participant will complete the post-test satisfaction questionnaire.
  • Trainer·       Provide training overview prior to usability testing, prep the protocol; finalize the questions; me, but also maybe one of you (Amy!)Facilitator·       Provides overview of study to participants·       Defines usability and purpose of usability testing to participants·       Assists in conduct of participant and observer debriefing sessions·       Responds to participant's requests for assistanceData Logger·       Records participant’s actions and commentsTest Observers·       Silent observer·       Assists the data logger in identifying problems, concerns, coding bugs, and procedural errors·       Serve as note takers.Test Participants
  • Nothing special! Can use a conference room, meeting room, office– whatever. It’s best if the space is consistent, to rule out variability. It’s also important that it’s comfortable. Heat and light matter. Coffee, tea, and water are good. This person is a guest and doing us a favor!
  • I mentioned consent (and assent if they’re younger), but definitely go beyond just saying it.Do it!All persons involved with the usability test are required to adhere to the following ethical guidelines:The performance of any test participant must not be individually attributable. Individual participant's name should not be used in reference outside the testing session.
  • To determine your site’s usability, you need to create measurable usability goals. Typical usability goals include time, accuracy, overall success, and satisfaction measures.You have started to define some of these already. Sometimes they’re a bit of a guess the first go-round!Again, this will help you benchmark so you have something to compare the new site to.Try to resist Googling ‘benchmarks’. They should be unique to you!If you don’t know where to start, starting with your analytics (we will talk about this this PM) is a good way to go.
  • Time: Set a goal for the overall time the user will need to carry out a task on your site. You can also break that down into separate goals for time to: Get to the right page Understand the information Recover from an error
  • Accuracy: Set a goal for the accuracy with which the user carries out the task. You can also break it down into separate goals for the number of: Unproductive navigation choices or searches Errors in use Misunderstandings of information
  • Success: Set a goal to measure users’ success with your site. For example: Set a goal that new users can get to help if they need it, find the help they need there, and get back to their original task within 2 minutes. Set a goal that repeat visitors be able to successfully complete a task without using the help feature.
  • Satisfaction: Set a goal that users are happy with their experience on your site. You can also set separate satisfaction goals for: Navigation Search Content detail and language
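The four goal types above (time, accuracy, success, satisfaction) can be checked mechanically once the test data is in. A minimal sketch; all thresholds and task data below are invented for illustration, not recommendations:

```python
# Sketch: comparing measured usability-test results against the four
# kinds of goals described above. Threshold values are assumptions.
GOALS = {"max_time_s": 120, "max_errors": 2,
         "min_success": 0.8, "min_satisfaction": 4.0}

def meets_goals(tasks, ratings):
    """tasks: (seconds, error_count, succeeded) per task attempt;
    ratings: post-test satisfaction on a 1-5 scale."""
    avg_time = sum(t for t, _, _ in tasks) / len(tasks)
    avg_errors = sum(e for _, e, _ in tasks) / len(tasks)
    success_rate = sum(1 for _, _, ok in tasks if ok) / len(tasks)
    avg_sat = sum(ratings) / len(ratings)
    return {
        "time": avg_time <= GOALS["max_time_s"],
        "accuracy": avg_errors <= GOALS["max_errors"],
        "success": success_rate >= GOALS["min_success"],
        "satisfaction": avg_sat >= GOALS["min_satisfaction"],
    }

report = meets_goals([(90, 1, True), (150, 3, True), (60, 0, False)],
                     [5, 4, 3])
```

A report like this doubles as the baseline to benchmark the new site against.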
  • When you set usability goals, you cannot say, "the system response time is going to be very slow, so we will set our time goal to account for that slow response." Users will leave your site if it is too slow. You must set a goal that matches users' needs and expectations and find a design solution to improve system response time if that is going to keep you from meeting the usability goal. If users give the site low ratings, you need to fix your site. However, if users give the site high ratings, you may not be getting a true picture.
  • Be skeptical! Users often give high satisfaction ratings even when they have problems using a site. They may: Be blaming themselves for the problems. Not want to hurt your feelings. Be polite rather than saying what they really think. [GROUP BRAINSTORM: What can you do to mitigate the pleaser?]
  • Scenarios are the main vehicle of usability testing, and the goal is always completion. Each scenario will require, or request, that the participant obtain or input specific data that would be used in the course of a typical task. The scenario is completed when the participant indicates the scenario's goal has been obtained (whether successfully or unsuccessfully) or the participant requests and receives sufficient guidance as to warrant scoring the scenario as a critical error.
  • Verb-based tasks ask users to accomplish a specific action with the product. Verb-based tasks are most commonly used to test software, hardware, and web applications.All of the tasks begin with a verb and ask users to complete a specific action. Verb-based tasks effectively evaluate the product's functionality and give teams the capability to test multiple users on the same tasks. Before the advent of the Web, almost all tasks for evaluating products were verb-based tasks.
  • Unlike verb-based tasks, we don't use scavenger hunt tasks to evaluate functionality. Instead, scavenger hunt tasks help us to assess content-rich systems such as information-rich web sites.With scavenger hunt tasks, we ask users to find a specific piece of information. These tasks help design teams evaluate whether users find and understand the product's content. The tasks almost always begin with the verb, "find.”The downside of traditional tasks such as verb-based and scavenger hunt tasks, is that it's challenging for teams to predict whether they've chosen realistic tasks for users to accomplish. Because of this, teams risk giving users tasks to complete with the product that aren't related to what they would actually do in a real-life situation.Test it on friends/family who don’t know site well!
  • To address the limitations of verb-based and scavenger hunt tasks, we use interview-based tasks, a task methodology developed by User Interface Engineering. With interview-based tasks, we interview users before and during the test to uncover users' real goals with a product. During the recruitment phase, we screen candidates to ensure they have the appropriate interests before they come to the lab.With interview-based tasks, when users first arrive for the test session, we don't actually know what we'll specifically be asking them to do during the session. Instead, at the beginning of the test, we interview users to get a better idea of how they use a product.Based on the users' specific responses to the questions, we'll work with them during the session to create tasks that are relevant to their specific needs. While we won't ask all users to complete the same tasks, we get a very good sense of how the product works for users in the real world.Interview-based tasks work best with sites and products that are almost ready to ship and populated with real information and data. Without real content or data for users to manipulate, it's impossible to mirror the true experience for the user.
  • [ACTIVITY: 15-20 min]World café. One HOST at each table. For the “interview based” table, think of what you might ask to get participants to start giving you ideas. For example, “Tell me about your relationship with reading.” What might elicit some good paths?The rest of you ROTATE when I call time. Discuss after:Do you see plusses and minuses?What do you think makes the most sense for your site?Does it depend on the user’s demographics at all?
  • Scenario Completion Time (time on task): the time to complete each scenario, not including subjective evaluation durations, will be recorded.
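As a rough sketch of how that metric could be logged during a session (the timer class and scenario name here are illustrative, not part of the test plan):

```python
import time

# Illustrative sketch: record time on task per scenario. Stop the clock
# before any subjective-evaluation questions so those durations are
# excluded, as the plan above specifies.
class ScenarioTimer:
    def __init__(self):
        self.times = {}  # scenario name -> elapsed seconds

    def start(self, scenario):
        self._current = scenario
        self._start = time.monotonic()  # monotonic clock is safe for intervals

    def stop(self):
        self.times[self._current] = time.monotonic() - self._start

timer = ScenarioTimer()
timer.start("Find the library's weekend hours")
# ... participant works on the scenario ...
timer.stop()
```

A spreadsheet column per scenario works just as well; the point is only that the clock pauses before debrief questions begin.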
  • Critical errors are deviations at completion from the targets of the scenario. Obtaining or otherwise reporting the wrong data value due to participant workflow is a critical error. Participants may or may not be aware that the task goal is incorrect or incomplete. Independent completion of the scenario is a universal goal; help obtained from the other usability test roles is cause to score the scenario a critical error. Critical errors can also be assigned when the participant initiates (or attempts to initiate) an action that will result in the goal state becoming unobtainable. In general, critical errors are unresolved errors during the process of completing the task or errors that produce an incorrect outcome.
  • Non-critical errors are errors that are recovered from by the participant or, if not detected, do not result in processing problems or unexpected results. Although non-critical errors can go undetected by the participant, when they are detected they are generally frustrating to the participant. These errors may be procedural, in which the participant does not complete a scenario by the most optimal means (e.g., excessive steps and keystrokes). These errors may also be errors of confusion (e.g., initially selecting the wrong function, or using a user-interface control incorrectly, such as attempting to edit an un-editable field). Non-critical errors can always be recovered from during the process of completing the scenario. Exploratory behavior, such as opening the wrong menu while searching for a function, may be coded as a non-critical error.
  • Subjective evaluations regarding ease of use and satisfaction will be collected via questionnaires, and during debriefing at the conclusion of the session.  The questionnaires will utilize free-form responses and rating scales.
  • To prioritize recommendations, a method of problem severity classification will be used in the analysis of the data collected during evaluation activities.  The approach treats problem severity as a combination of two factors - the impact of the problem and the frequency of users experiencing the problem during the evaluation.
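A minimal sketch of how that two-factor severity scheme could be scored. The point scales and cutoffs below are assumptions chosen for illustration, not a standard; adjust them to your own classification.

```python
# Hedged sketch of the "impact x frequency" severity classification
# described above. Scales and thresholds are invented for illustration.
def severity(impact, frequency_pct):
    """impact: 1 (low) to 3 (high); frequency_pct: % of participants affected."""
    if frequency_pct >= 50:
        freq_score = 3
    elif frequency_pct >= 20:
        freq_score = 2
    else:
        freq_score = 1
    score = impact * freq_score  # combine the two factors
    if score >= 6:
        return "high"    # candidate for fixing before launch
    if score >= 3:
        return "medium"
    return "low"
```

So a high-impact problem hit by most participants scores "high", while a cosmetic issue seen once scores "low", which gives the report a defensible ordering for its recommendations.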
  • Testing the Site, NOT the Users: We try hard to ensure that participants do not think that we are testing them. We help them understand that they are helping us test the prototype or website.
Performance vs. Subjective Measures: We measure both performance and subjective (preference) metrics. Performance measures include success, time, errors, etc. Subjective measures include the user's self-reported satisfaction and comfort ratings. People's performance and preference do not always match. Often users will perform poorly but their subjective ratings are very high. Conversely, they may perform well but subjective ratings are very low.
Make Use of What You Learn: Usability testing is not just a milestone to be checked off on the project schedule. The team must consider the findings, set priorities, and change the prototype or site based on what happened in the usability test.
Find the Best Solution: Most projects, including designing or revising websites, have to deal with constraints of time, budget, and resources. Balancing all those is one of the major challenges of most projects.
  • http://www.cpl.org/Home.aspx [ACTIVITY: 10 min] The rest of you please observe! Think about what data you might record, what you might note, what you would do differently. DEBRIEF: Helper, how did it feel to you? Group, what did you notice? What types of tasks did I ask her/him to do? Did you see any critical errors? Non-critical? What did you hear in the subjective evaluation at the end? What worked/didn't? What data would you collect/note? (Time, pathway, success, severity rating)
  • http://www.youtube.com/watch?v=3Qg80qTfzgU
  • The best usability test moderators have a lot in common with an orchestra conductor. How do you become a great moderator? Simple: with practice. Facilitating sessions is a learned skill that improves the more you do it. There are some simple tricks and techniques behind it. Once you learn those, and have a chance to practice them, you too can become a top-notch moderator. An important trick to moderating is mastering the multiple personalities involved. Jared M. Spool, uie.com
  • From the moment the participant walks in the door, the moderator helps them feel at home. They get them coffee, explain the procedure, and answer questions. (The best moderators start before the participant arrives, by working with the recruiters to set the right expectations and answer any questions.) During the session, they smile a lot, keeping the session relaxed. They watch diligently for any signs of stress. "You're doing well." "This is helping a lot." "You're helping us discover problems we didn't realize we had." Safety and comfort: that's the flight attendant's focus.
  • The sportscaster personality's job is to make sure every observer in the session catches all of the action. When we're facilitating usability tests, we start by setting up a projector in the room, so it's easy for the observers to see what's on the participant's screen. We encourage the participant to "think out loud," letting us know what's going through their head as they use the design. For those participants who are naturally quiet, we engage in a "color commentary," where we repeat and narrate the activity. The sportscaster kicks in to ask questions to better understand the participant's viewpoint. The sportscaster knows her audience. She caters the session to the folks who are watching. Catching all the action: that's the sportscaster's focus.
  • The scientist personality looks for the data. Since the goal of any user research is to help the team make better design decisions, the scientist is there to collect the data and help the team analyze it. Like the other roles, this starts long before the participant shows up. The scientist puts together the test plans, deciding the tasks the participants will try. The scientist creates questionnaires and interview scripts to learn more about the participant's background and experience. Everything the scientist does is to make sure the team collects every piece of data they'll need. Part of the preparation involves how the findings are used once the sessions are completed. How will the team analyze and synthesize this information? Guiding the data collection: that's the scientist's focus. Once you master the three priorities, you'll find it easy to get the team excited about testing. They'll come out of the session energized and itching to make improvements. And that's what good user research is all about.
  • We want to predict future behaviors so we can make designs that serve them. Yet just asking participants may not get the actual answer. I've found the best way to predict future behavior is to look at past behavior. Asking a participant about what they've done in the past is a better way to get those answers. Instead of asking, "Do you think you might need to store messages in different folders?", I've found it better to ask, "When you're done with messages, what do you do with them?" I focus on what they've done in the past, looking for that behavior to tell me what makes sense. Users aren't designers. (If they were, they wouldn't need us.) They don't know how to deal with constraints. They haven't really thought the problem through. Try to avoid asking, "How would you design it?" or "If you were a designer…" Many templates will tell you you should; it's largely useless. "Is the reason you don't click this button because it's really hard to see?" Be creative about exploring the participant's understanding of the interface. For example, when I'm curious about why they didn't click a particular button, I ask them to talk about what each button does. Jared M. Spool, uie.com
  • Elaborated scenarios give more user story details. These details give the Web team a deeper understanding of the users and the characteristics that may help or hinder site interaction. Knowing this information, the team is more likely to develop content, functionality, and site behavior that users find comfortable and easy to work with. E.g., Mr. and Mrs. Macomb are retired schoolteachers who are now in their 70s. Their Social Security checks are an important part of their income. They've just sold their big house and moved to a small apartment. They know that one of the many chores they need to do now is tell the Social Security Administration that they have moved. Full-scale task scenarios include the steps to accomplish the task. A full-scale scenario can either report all the steps that a specific user currently takes to accomplish the task, or it can describe the steps you plan to set up for users in the new site. Similar to a use case. [NEXT SLIDE FOR NEXT 2]
  • A persona is a fictional person who represents a major user group for your site. Personas help you identify major user groups of your website. You select the characteristics that are most representative of those groups and turn them into a persona. A persona usually includes a name and a picture. You will need to add some demographics such as age, education, ethnicity, or family status. Give the persona a job title and include their major responsibilities. Include the goals and tasks they are trying to complete using the site and their environment (i.e., physical, social, and technological). You can also include a quote that sums up what matters most to the persona as it relates to your site. You make up the persona's name. Select one that represents that user group. Be relevant and serious; humor usually is not appropriate here. (Use licensed or stock photos.) Using personas helps the team focus on the users' goals and needs. The team can concentrate on designing for a manageable set of personas, knowing they represent the needs of many users. By always asking, "Would Jim use this?" the team can avoid the trap of building what users ask for rather than what they will actually use. Design efforts can be prioritized based on the personas. Designs can be constantly evaluated against the personas, and disagreements over design decisions can be sorted by referring back to the personas. Big companies, like Microsoft and Staples, swear by these. [ACTIVITY: 10-15 min] In groups of 3-4, come up with 2-3 personas you think would work well to test your current and/or future website. Be as vivid in the details as you can. Draw the person, or try to find a picture on the web that could be them!
  • Email me the picture(s) if you want to… You don't have to, but it could be fun. Then I can pull them up on screen.
  • Task analysis can help you:
Understand the tasks your website must support
Determine appropriate content scope
Refine or re-define your site's navigation or search to better support users' goals
Build pages and applications that match users' goals, tasks, and steps
Your task analysis may have several levels of inquiry, from general to very specific. You can ask users what overall tasks they are trying to accomplish or how they currently accomplish the task. What overall tasks are users trying to accomplish on our website? How are users currently completing the task? Ask 4 people what they might use the website for or what they ARE using it for. Check your own assumptions. Everyone is an evaluator. Can also ask the same people about personas. Describe how your best friend, parent, or child might use the library website. Tell me about them.
  • The Usability Test Report will be provided at the conclusion of the usability test. It will consist of:
A report and/or a presentation of the results
Evaluation of the usability metrics against the pre-approved goals
The subjective evaluations
Specific usability problems
Recommendations for resolution
The recommendations should be prioritized to aid in implementation strategy.
  • I will leave this largely in your hands, though there are elements of user-centered design and testing that will come in handy and I am going to introduce you to one of them.
  • Card sorting is a method used to help design or evaluate the information architecture of a site. In a card sorting session, participants organize topics from your website into categories that make sense to them. Participants may also help you label these groups. Card sorting may involve physical cards or pieces of paper, or it may be accomplished with one of several online card-sorting software tools. Card sorting will help you understand your users' expectations and understanding of your topics. Knowing how your users group information can help you:
Build the structure for your website
Decide what to put on the homepage
Label categories and navigation
  • In an open card sort, participants are asked to:
Organize topics from content within your website into groups that make sense to them, and then
Name each group they created in a way that they feel accurately describes the content.
Use an open card sort to learn how users group content and the terms or labels they give each category.
  • In a closed card sort, participants are asked to sort topics from content within your website into pre-defined categories. A closed card sort works best when you are working with a pre-defined set of categories and you want to learn how users sort content items into each category. Combination card sort: conduct an open card sort first to identify content categories. You can then use a closed card sort to see how well the category labels work.
  • Participants think aloud while sorting, giving a clearer picture of their reactions and thought processes. This type of sort may be completed with physical cards or with online card-sorting software and the facilitator looking on and asking questions as needed.
  • More than one participant at a time. Participants sort a set of cards independently. The facilitator may brief the participants at the beginning and debrief the participants at the end, but the participant works alone for most of the session. Because of the limited interaction, you can have many sessions at the same time with one facilitator. You must have as many sets of cards as concurrent sessions or have each participant at a separate computer if using online card-sorting software.They shouldn’t work together. (Gets muddy.)
  • Allow you to have many participants in many locations. You do not get information on why participants sort the cards the way they do, because you cannot see the participants or hear them thinking out loud.Participants sort the cards independently on their own computers. You can do open or closed card sorts remotely. Several software programs exist to help you with large-scale remote card-sorting studies. Using the software is an advantage because it analyzes the data for you.
  • [GROUP: Who would these participants be? How would you find them?]
  • Create your list of content topics. Topics can be phrases, words, etc., and can be very specific or more general. It might be tempting to have a card for every topic on your site, but in this case, more might not be better. Consider the cognitive load on the participant. You want them attentive! Limit the set to 50-60 at most; I would recommend 30-40. You have been doing a content inventory. Use this! Keep the cards neat, legible, and consistent. You'll have the list of topics in the computer for later analysis. Number the cards in the bottom corner or on the back. This helps you when you begin to analyze the cards. Have blank cards available for participants to add topics and to name the groups they make when they sort the cards. Consider using a different colored card for having participants name the groups. For paper card sorts, ensure the participant has enough room to spread the cards out on a table or tack/tape them up on a wall or dry-erase board. A conference room works well. For online card sorts, ensure there is a computer with an internet connection available, as well as room for both the participant(s) and facilitator to sit comfortably. Plan to have the facilitator or another usability team member take notes as the participant works and thinks aloud. As with other techniques, arrange for payment or other incentives to thank the participant for spending the time and effort helping you. Let the participant work. Minimize interruptions but encourage the participant to think aloud. Allow the participant to:
Add cards - for example, to indicate lateral hyperlinks or additional topics.
Put cards aside to indicate topics the participant would not want on the site.
If, at the end, the participant has too many groups for the homepage, ask if some of the groups could be combined. Ask the participant to name each category.
  • If you used physical cards for the test, either photograph the sort or use the numbers on the cards to quickly record what the participant did. Photograph or write down the names the participant gave to each grouping and the numbers of the cards the participant included under that name. Then you can reshuffle the cards for the next session. Create a computer file for each session to gather a complete picture of the detailed site map each user creates. Work from your original list of topics and move topics around to recreate each participant's groupings, and enter that participant's name for the groupings. Consider qualitative information based on user comments. Analyze quantitative information based on:
Which cards appeared together most often
How often cards appeared in specific categories
[ACTIVITY: 10-15 min] Split into 2 groups. I am giving each of you some cards (20 white cards, 10 colored, per group). I want you to pretend you need to make a website for your friend who is having a baby. Come up with content topics and structure them in a logical way that makes sense for your group. If you want to include some pre-defined content categories, you can; use the colored cards for that. If not, you can leave the cards without headings and ask the other group to come up with some. Your choice. Take a photo when done, then mix up the cards. TRADE TABLES. Try to do a card sort. After we're done we'll compare and contrast. USUALLY YOU WOULD NOT DO THIS IN GROUPS.
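The "which cards appeared together most often" count can be tallied mechanically once each session is in a computer file. A small sketch (the card names and data layout are invented for illustration; online card-sorting tools do this analysis for you):

```python
from collections import Counter
from itertools import combinations

# Count how often each pair of cards was placed in the same group,
# across all participants' sorts.
def cooccurrence(sessions):
    """sessions: one entry per participant; each entry is a list of
    groups, and each group is a list of card names."""
    pairs = Counter()
    for sort in sessions:
        for group in sort:
            # sort names so each pair is counted under one canonical key
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Two hypothetical participants' sorts of library-site cards:
sessions = [
    [["Hours", "Directions"], ["eBooks", "Databases"]],
    [["Hours", "Directions", "Events"], ["eBooks", "Databases"]],
]
# ("Directions", "Hours") and ("Databases", "eBooks") each co-occur twice
```

Pairs with high counts are strong candidates to live under the same navigation category; pairs that never co-occur probably should not share one.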
  • http://www.youtube.com/watch?v=TNvdgXCqEvM [DISCUSS] What's wrong with this example? Would you change anything? What did you like?
  • Test early. Test often. You should be doing usability testing throughout the process: from baseline testing on the old site, to tests with partial and low-fidelity prototypes, to testing both navigation and content as you fill out the site. In a heuristic evaluation, usability experts review your site's interface and compare it against accepted heuristics (usability principles). The analysis results in a list of potential usability issues. A heuristic evaluation should not replace usability testing. Although the heuristics relate to criteria that affect your site's usability, the issues identified in a heuristic evaluation are different from those found in a usability test. If you cannot implement all the recommendations, develop priorities based on fixing the most global and serious problems. As you prioritize, push to get the changes that users need. The cost of supporting users of a poorly designed site is much greater than the cost of fixing the site while it is still being developed. The iterative design process, in which you develop a partial prototype, test early, fix and expand the prototype, test again (and repeat), is the most successful way to develop a website.
  • Asking these questions isn't evil. We're not violating some Geneva-Convention-type international agreement. We won't be hauled away by the User Research Police. However, there is a price to asking the wrong questions. When conducting user research, the most valuable moments are the limited times that the team spends with each participant. It's important to make every second count. Asking one of these questions wastes those precious moments. They take time without returning value. The participant tries (and usually tries really hard), but that trying doesn't pay off and eats down the clock. Learning to focus on the right questions can help you get the most out of each session. Jared M. Spool, uie.com
  • How do you become great at UCD and UT? Simple: with practice. It's a learned skill that improves the more you do it. And put it to USE.
  • QUESTIONS?[BREAK][MOVE ON TO GOOGLE ANALYTICS][ADD GOOGLE TRENDS AND TAGXEDO IF THERE’S TIME]
  • Baltimore Pratt Library: UCD and Usability Testing Training
