
Presentation Transcript

  • Usability Testing (Kathryn Summers, © 2003)
  • Usability goals
    • Usefulness (enables user to achieve her goals)
    • Effectiveness, or ease of use (can sometimes be defined quantitatively)
    • Learnability
    • Likeability
  • Costs of Poor Design
    • Costs to client
      • increased support costs
      • reduced sales
      • expensive redesign
    • Costs to customer/user
      • reduced employee productivity
      • increased employee frustration
      • more frequent mistakes
  • Two kinds of usability testing
    • Evaluative
    • Exploratory
      • (Rubin’s four test types: exploratory, assessment, validation, comparison)
    • A usability test is NOT a focus group. A usability test looks at what users DO (at their process and behavior), not at what users SAY they do or want.
    • Listen to what they say they want, but don’t let the test slip into focus group mode.
  • Evaluative Testing
    • Confirm that you’ve met a benchmark for usability before release
    • Confirm that product is more usable than a prior version or a competing product
    • Evaluative testing will focus on tasks and may often involve quantitative measurements
  • Quantitative testing
    • Objectives and goals are quantitative
    • Results often used to set standards, performance criteria
    • Test involves reduced interaction between test participant and moderator
    • Test conditions need to be fairly well controlled
  • Metrics-based Testing
    • time on task
    • error rate
    • successful completion rate
    • number of times (or how long) help system is accessed
    • time spent recovering from errors
    • number of commands/features used
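As a rough illustration, the metrics above can be computed from logged test sessions. This is a sketch only: the log format and field names (task_seconds, errors, completed, help_opens) are invented for illustration, not taken from any particular tool.

```python
# Sketch: computing common usability metrics from hypothetical session logs.
# Field names and values are made up for illustration.
from statistics import mean

sessions = [
    {"task_seconds": 75,  "errors": 2, "completed": True,  "help_opens": 1},
    {"task_seconds": 120, "errors": 5, "completed": False, "help_opens": 3},
    {"task_seconds": 60,  "errors": 0, "completed": True,  "help_opens": 0},
    {"task_seconds": 90,  "errors": 1, "completed": True,  "help_opens": 2},
]

time_on_task = mean(s["task_seconds"] for s in sessions)                 # average seconds per task
error_rate = mean(s["errors"] for s in sessions)                         # average errors per session
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)  # fraction who finished
help_accesses = sum(s["help_opens"] for s in sessions)                   # total help-system visits
```

Even this toy example shows why the metric definitions matter: "error rate" here is errors per session, but it could equally be defined per task attempt or per minute, and the numbers will differ.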
  • Problems with quantitative testing
    • Individual users may vary widely: measurements of speed, memory, or any other metric may depend more on the characteristics of a particular participant than on the site or interface you are testing.
    • One solution is to test so many users that you can apply statistical tests of validity to the results. This can obviously be expensive. It also requires some statistical expertise.
    • Results of testing may not help solve the problems identified (no guidance for redesign)
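The variability problem can be made concrete with a small sketch. The task times below are invented; the point is that per-user spread can swamp the difference between two designs, which is why a formal test statistic (here Welch's t, computed from scratch with the standard library) stays small even when the means differ.

```python
# Sketch: per-user variability can dominate the design difference.
# Task times (seconds) are hypothetical.
from math import sqrt
from statistics import mean, variance

design_a = [30, 60, 45, 90, 35]  # one participant per entry
design_b = [28, 55, 50, 80, 32]

def welch_t(a, b):
    """Welch's t statistic; a large |t| suggests a real difference."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

t = welch_t(design_a, design_b)
# The means differ by only 3 seconds while individual times range from
# 28 to 90, so |t| lands far below any conventional significance threshold.
```

With samples this small and this noisy, no statistical test will certify a difference; that is the expense the slide alludes to, since the usual remedy is many more participants.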
  • Exploratory tests: testing prototypes, or comparing prototypes
    • Focus on exploring relationship between system image and user’s mental model
      • Representing classes of objects
      • Representing relationships between objects
      • Allowing user to manipulate objects
    • Test navigation, help access, subject matter organization
  • The test plan
    • Create a problem list/test objectives
    • Translate problem list into tasks
    • Prioritize tasks (frequency, criticality, suspected problems), decide which tasks to test, write test script (scenarios)
    • Identify resources test participants will need for the tasks
      • Hardware, software, data files, instructions, internet connection, time
    • Decide what data/measurements to collect
    • Conduct a pilot test
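The prioritization step above (frequency, criticality, suspected problems) can be sketched as a simple scoring pass over the candidate task list. The 1–5 scales, the weighting, and the task names are all assumptions for illustration, not a prescribed method.

```python
# Sketch: prioritizing candidate tasks for the test script.
# Ratings (1-5) and the scoring rule are invented for illustration.
tasks = [
    {"name": "search for a product", "frequency": 5, "criticality": 3, "suspected_problems": 2},
    {"name": "check out",            "frequency": 4, "criticality": 5, "suspected_problems": 4},
    {"name": "edit account profile", "frequency": 2, "criticality": 2, "suspected_problems": 1},
]

def priority(task):
    # Weight frequent-and-critical tasks highest, then nudge by suspicion.
    return task["frequency"] * task["criticality"] + task["suspected_problems"]

# The top-scoring tasks become the scenarios in the test script.
test_tasks = sorted(tasks, key=priority, reverse=True)[:2]
```

Whatever scoring rule you pick, the value of writing it down is that the team can argue about the ratings instead of about gut feelings.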
  • Task scenarios
    • Scenarios are tasks put into a short narrative context. They are meant to take some of the artificiality out of the task.
      • Use the user’s language (not the system’s language)
      • Keep them short.
      • Test to make sure scenario is not ambiguous, and that you’ve provided enough info to do the task
      • Provide the goal, but no instructions
    • Scenario can be delivered verbally by the test monitor, in written form, or role-played by the test team
  • Characteristics of a good task
    • Based on a goal that matters to users
    • Connects to your product/business success
    • Appropriate scope (not too hard, but not trivial, won’t take too long)
    • Has a clear endpoint
    • Helps you see what users do, not just listen to what they say
  • Selecting test participants
    • Identify your target user groups
      • (demographics, background knowledge, experience, buying habits, goals)
    • Decide which groups you want, and how many users you’ll recruit
      • (decide what characteristics users will share, what characteristics will vary)
    • Draft a test participant screener
    • Test the screener to see if it works
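A screener is, in effect, a filter over candidates: shared characteristics become hard requirements, while the characteristics you want to vary are left unconstrained. The criteria and candidate fields below are invented for illustration.

```python
# Sketch: a participant screener as a filter function.
# Candidate records and screening criteria are hypothetical.
candidates = [
    {"name": "P1", "age": 34, "shops_online": True,  "works_in_ux": False},
    {"name": "P2", "age": 22, "shops_online": False, "works_in_ux": False},
    {"name": "P3", "age": 45, "shops_online": True,  "works_in_ux": True},
]

def passes_screener(c):
    # Target group: adult online shoppers; exclude industry insiders,
    # whose expertise would make them unrepresentative.
    return c["age"] >= 18 and c["shops_online"] and not c["works_in_ux"]

recruits = [c["name"] for c in candidates if passes_screener(c)]
```

"Test the screener to see if it works" then means running real respondents through it and checking that the people who pass actually match your target groups.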
  • Being a good moderator
    • Build rapport with participants
    • Avoid influencing the results
    • Be flexible
    • Listen carefully, pay close attention
    • Watch details but within larger context
      • Cohesive picture of each test
      • Patterns between tests
    • Don’t just rely on memory
    • Communicate results effectively
    • Stay organized
    • Practice!!!
  • Moderating problems to avoid
    • Leading or helping the user (type of question, body language), answering questions
    • Not asking follow-up questions to understand the user’s mental model or thought process
    • Putting pressure on the user by reinforcing task success rather than user feedback
    • Not paying enough attention to the user (taking notes, thinking about the test or other things)
    • Being too rigid
    • Not establishing rapport, not making user comfortable
    • Jumping to conclusions
  • Ways to improve
    • Watch other moderators
    • Watch yourself on tape
    • Listen to Kathryn (work with a mentor)
    • Watch your tapes, to catch things you missed and to get a better sense of what to watch for
  • Limitations of usability testing
    • Testing situations are always artificial, and thus will influence the results
    • Test participants are usually not perfectly representative users
    • The test design can never fully duplicate natural user behavior
  • Experts on expert evaluation
    • Rubin says testing is not always the best technique to use; he sometimes recommends an expert evaluation instead.
    • Expert evaluation is fine in early stages of the project; you don’t need users to tell you the nav is inconsistent, or that there’s no error recovery. But you need usability testing in later stages of the project.
    • Nielsen claims 5 usability experts will find 80% of the problems (DR 67); I don’t believe it.
    • Ginny Redish says expert review will find the greatest number of the little problems; you need usability testing to find the major problems.
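Nielsen's 80% figure is usually traced to the Nielsen–Landauer problem-discovery model, found(n) = 1 − (1 − p)^n, with an assumed average per-evaluator discovery probability p of about 0.31. The arithmetic behind the claim is easy to check; whether p is really that high for your product is exactly what is in dispute.

```python
# The Nielsen-Landauer discovery model: the proportion of usability
# problems found by n independent evaluators, assuming each evaluator
# finds a problem with probability p (Nielsen's average estimate: 0.31).
def proportion_found(n, p=0.31):
    return 1 - (1 - p) ** n

# With p = 0.31, five evaluators are predicted to find roughly 84%
# of the problems, which is the source of the "~80%" claim.
five_evaluator_estimate = proportion_found(5)
```

Note the model assumes every problem is equally discoverable and evaluators are independent; drop either assumption and the curve flattens, which is one reason to share the skepticism voiced above.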
  • User-centered designs
    • Are more useful (can do more tasks that the user cares about)
    • Are easier to learn and use
    • Are more enjoyable to use (more likable)