Beyond Usability Testing: Assessing the Usefulness of Your Design

Usability tests are meant to find usability problems. If your question is, “Where are the usability problems in this design?”, usability testing is right for you. With usability testing, you can study how well someone gets from point A to point B and where the problems are along the way. Finding usability problems is the focus, and the method works well for it.

But we are finding that many of the questions business sponsors and stakeholders bring to us are not about finding usability problems. Their questions are about the overall usefulness of a design, its potential for success, and how well it meets expectations.

This presentation will define usefulness research, show how it differs from usability testing, and offer approaches for asking the right questions of users. Whether you think this is slap-your-forehead obvious or a method that needs to be expanded and refined, we seek to have a lively conversation.

Speaker notes
  • Goal of usability testing is to find usability problems.
  • Transactional Applications
  • Content / Informational Sites
  • Gaming
  • Social Sites
  • Marketing / Persuasive
  • So when a client wants discount research to guide their UX strategy and to learn about the value and usefulness of a prototype or existing interface, what do we do? We turn to what we know: usability testing.
  • What strengths can we take from usability testing? How should the methodology evolve? What questions should we pose to elicit this information from participants? What is an acceptable prototype fidelity level?
  • Moderator and participant interact one-on-one (except for unmoderated studies). Three primary components:
    Pre-task questions: the moderator interviews the participant to clarify demographics, assess level of expertise, and gauge prior experience.
    Tasks: the participant proceeds through a defined set of tasks with the design artifact while “thinking aloud”. The moderator minimizes intrusion, identifies usability problems, and tracks metrics such as time on task and completion percentage.
    Post-task questions: the moderator asks the participant about their experience with the tasks, level of satisfaction with the design, and points of confusion or ambiguity.
    For usefulness research, the pre-task questions are no longer just about determining demographics and level of experience. Rather, they need to set the participant’s mindset for evaluating the design from a usefulness perspective. Borrow from ethnography, contextual inquiry, interviewing, and laddering: daily task flow and current mechanisms, pain points, expectations and desires, ideal scenarios, other systems.
    The emphasis changes from contrived tasks to participant-directed tasks, determined from the pre-task and probing questions. “Tell me how you would use this application in your daily routine. Okay, now please go do some of those on this prototype.” “You mentioned that you would need to determine your size – how would you do that?”
    Gather expectations at the outset of each task: “What would you expect to happen? What information would you expect the page to contain?” This sets the basis for participant commentary (not that you should design exactly what they say, but it can inform ideas). This may not be a natural way to interact with the design, but we’re not testing usability.
    Follow-up questions emphasize expectations and value: “How did completing this task compare with your expectations? Was it better or worse, and why? What’s missing? What’s superfluous?” “How does it compare with other applications or systems?” “What other features or functions would you need when completing this task?”
    The participant is directed to walk back through the task, responding to questions at key interaction points. Ask strategic probing questions about salient components that interrupt or inform the task: “What would make this more useful? How would you use this in your work/life?” “Does this table contain the columns that you need? Did the categories make sense?” Probe on areas that were not covered by the participant’s task: “How would you get the detailed information about this product?”
  • Usability is still important! But recognize when you have different goals and adjust accordingly. Be aware of the nuances and differences between usability and usefulness. Set clear expectations and goals with the project team. Should we name it a “usefulness study” rather than a “usability test”? Drawbacks: you may need deeper prototypes, and you are not getting at usability problems. Danger: asking the participant to help design the product is an easy trap to fall into, but not a good one.

Transcript

  • 1. Beyond Usability Testing: Assessing the Usefulness of Your Design
    UPA Boston Mini-Conference 2011
    Prepared by:
    Michael Hawley – Chief Design Officer
    Daniel Berlin – Experience Research Director
    May 25, 2011
  • 2. Have You Run Usability Tests?
  • 3. Usability Testing
    Participants attempt to complete a set of defined tasks.
    Researchers learn what to improve by observing and interpreting think-aloud.
  • 4. Virzi, R.A., Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? Human Factors, 1992. 34(4): p. 457-468.
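    The Virzi citation above is the classic source for sample-size guidance in usability testing; it is commonly summarized with the discovery model P(found) = 1 − (1 − p)^n, where p is the average probability that one participant encounters a given problem. A minimal sketch of that arithmetic, assuming an illustrative p ≈ 0.31 (a frequently cited average, not a figure from this deck):

    ```python
    # Discovery model often used to summarize Virzi (1992):
    # expected share of usability problems found by n participants,
    # where p is the average per-participant detection probability.
    # p = 0.31 is an illustrative assumption, not a fixed constant.

    def proportion_found(n: int, p: float = 0.31) -> float:
        """Expected proportion of problems uncovered by n participants."""
        return 1 - (1 - p) ** n

    if __name__ == "__main__":
        for n in (1, 3, 5, 10):
            print(f"{n:2d} participants -> {proportion_found(n):.0%} of problems")
    ```

    With these assumptions, five participants uncover roughly 84% of problems, which is the usual argument for small-sample usability tests; note the model says nothing about usefulness questions, which is the point of this presentation.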
  • 5. Trend
    Business sponsors turn to us as UX professionals with questions that are not about usability problems.
    Rather, their questions are about overall user experience strategy, value and usefulness.
  • 6.
  • 7. Usability:
    Find interruptions in workflow that prevent users from performing tasks quickly and efficiently.
  • 8. Usefulness:
    Inform a re-structure of the application to best align with workflow.
    Determine where to position productivity tips and help buttons within the application for best utilization.
    Find optimal level of personalization and customization that users would take advantage of.
  • 9.
  • 10. Usability:
    Assess effectiveness of navigation system in guiding users to desired pages.
    Evaluate descriptiveness and clarity of links.
    Gauge ability of page layouts to orient users to relevant content.
  • 11. Usefulness:
    Identify missing content that would help overcome objections or answer critical questions.
    Understand how branded labels and content themes contribute to the overall experience or detract from it.
    Determine level/types of promotions and interstitials that are acceptable to users.
    Understand how different audience personas prefer to consume information for the particular domain.
  • 12.
  • 13. Usability:
    Determine optimal level of difficulty to encourage advancement to multiple levels of the game.
    Assess discoverability of game features and controls.
  • 14. Usefulness:
    Find the optimal rate of point accumulation and alignment with prize levels.
    Understand best use of social media within or around the game.
    Determine the threshold for ads, interstitials and registration for game play.
  • 15.
  • 16. Usability:
    Assess if users can figure out how to add a comment, share content, or use a tagging mechanism to find what they are looking for.
  • 17. Usefulness:
    Determine the most compelling and appealing topics or categories for conversation.
    Understand the level of involvement the sponsoring company should have in the social experience, if any.
    Balance branded or non-branded experience for optimal trust of the site.
    Determine the elements or attributes that should allow comment and review.
  • 18.
  • 19. Usability:
    Find any confusion points or interruptions that prevent users from registering.
    Find misleading or ambiguous terminology.
  • 20. Usefulness:
    Determine the most persuasive elements that will compel the target audience to register.
    Understand a design’s impact on a user’s perception of the brand.
    Position the offering and messaging against the company’s competitors.
    Determine missing content that can help target audience make an informed decision about the product.
  • 21. Sound Familiar?
  • 22. Natural Reaction
    Turn to what we know: Usability Testing
    (one-on-one interviews, design artifact, and tasks)
  • 23. Are You Forgetting Contextual Inquiry and Foundational Research?
    Discovery research and needs analysis is valid, but:
    • Time and budget for separate research is not always an option
    • Many participants need design artifacts to elicit appropriate reaction and commentary
  • 24. Our Goal
    Leverage the strengths of usability testing, but adjust our approach when objectives differ from finding usability problems.
  • 25. Three Components
  • 26. Three Components
  • 27. Three Components
  • 28. Three Components
  • 29. Summary
    Foundational research is still important.
    Usability testing is still important.
    However, recognize when you have different goals and adapt the research method as necessary.
  • 30. Additional Information
    Complete Presentation Slides
    • http://www.slideshare.net/hawleymichael
    • http://www.slideshare.net/banderlin
    Contact Information
    Michael Hawley
    mhawley@madpow.net
    @hawleymichael
    Dan Berlin
    dberlin@madpow.net
    @banderlin