1. Beyond Usability Testing: Assessing the Usefulness of Your Design
UPA Boston Mini-Conference 2011
Prepared by:
Michael Hawley – Chief Design Officer
Daniel Berlin – Experience Research Director
May 25, 2011
4. Virzi, R.A., Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough? Human Factors, 1992. 34(4): p. 457-468.
5. Trend
Business sponsors turn to us as UX professionals with
questions that are not about usability problems.
Rather, their questions are about overall user experience
strategy, value and usefulness.
8. Usefulness:
Inform a re-structure of the application to best align with
workflow.
Determine where to position productivity tips and help
buttons within the application for best utilization.
Find the optimal level of personalization and customization
that users would take advantage of.
10. Usability:
Assess effectiveness of navigation system in guiding users
to desired pages.
Evaluate descriptiveness and clarity of links.
Gauge ability of page layouts to orient users to relevant
content.
11. Usefulness:
Identify missing content that will help overcome
objections or answer critical questions.
Understand how branded labels and content themes
contribute to the overall experience or detract from it.
Determine level/types of promotions and interstitials that
are acceptable to users.
Understand how different audience personas prefer to
consume information for the particular domain.
13. Usability:
Determine optimal level of difficulty to encourage
advancement to multiple levels of the game.
Assess discoverability of game features and controls.
14. Usefulness:
Find the optimal rate of point accumulation and alignment
with prize levels.
Understand best use of social media within or around the
game.
Determine the threshold for ads, interstitials and
registration for game play.
16. Usability:
Assess if users can figure out how to add a comment,
share content, or use a tagging mechanism to find what
they are looking for.
17. Usefulness:
Determine the most compelling and appealing topics or
categories for conversation.
Understand the level of involvement the sponsoring
company should have in the social experience, if any.
Balance branded or non-branded experience for optimal
trust of the site.
Determine the elements or attributes that should allow
comment and review.
19. Usability:
Find any confusion points or interruptions that prevent
users from registering.
Find misleading or ambiguous terminology.
20. Usefulness:
Determine the most persuasive elements that will compel
the target audience to register.
Understand a design’s impact on a user’s perception of
the brand.
Position the offering and messaging against the company’s
competitors.
Determine missing content that can help target audience
make an informed decision about the product.
22. Natural Reaction
Turn to what we know:
Usability Testing
(one-on-one interviews, design
artifact, and tasks)
23. Are You Forgetting Contextual Inquiry and Foundational Research?
Discovery research and needs analysis are valid, but:
• Time and budget for separate research is not always an
option
• Many participants need design artifacts to elicit
appropriate reaction and commentary
24. Our Goal
Leverage the strengths of usability testing but adjust our
approach when objectives differ from finding usability
problems.
26. Three Components

Phase                Usability                            Usefulness
Pre-Task Questions   Demographics, level of expertise,    Daily task flow, pain points,
                     prior experience.                    expectations, desires, scenarios.
                     Goal: validate groups and            Goal: set the mindset for a
                     classify results.                    usefulness evaluation of the
                                                          design.
Tasks                Pre-defined tasks, minimal           Emphasis on participant-directed
                     moderator intrusion.                 tasks.
                     Goal: find usability problems;       Goal: understand how the proposed
                     measure time on task and             design aligns with user needs.
                     completion percentage.
Post-Task Questions  Level of satisfaction with the       Comparison with expectations and
                     design, and points of confusion      value. Task retrospectives.
                     or ambiguity.                        Goal: discuss opportunities for
                     Goal: measure usability.             improvement.
30. Example: Usefulness
Goal:
Role of personalization
Pre-Task Questions:
Common tasks, pain
points
Tasks:
Emphasis on participant
direction
Post-Task Questions:
Comparison with
expectations
31. Summary
Foundational research is still important.
Usability testing is still important.
However, recognize when you have different goals and
adapt the research method as necessary.
32. Additional Information
Complete Presentation Slides
• http://www.slideshare.net/hawleymichael
• http://www.slideshare.net/banderlin
Contact Information
Michael Hawley: mhawley@madpow.net, @hawleymichael
Dan Berlin: dberlin@madpow.net, @banderlin
Editor's Notes
Goal of usability testing is to find usability problems.
Transactional Applications
Content / Informational Sites
Gaming
Social Sites
Marketing / Persuasive
So when a client wants discount research to guide their UX strategy, and wants to learn about the value and usefulness of a prototype or existing interface, what do we do? We turn to what we know: usability testing.
What strengths can we take from usability testing? How should the methodology evolve? What questions should we pose to elicit this information from participants? What is an acceptable prototype fidelity level?
Moderator and participant interact one-on-one (except for unmoderated studies). Three primary components:
• Pre-task questions: the moderator interviews the participant to clarify demographics, assess level of expertise, and gauge prior experience.
• Tasks: the participant proceeds through a defined set of tasks with the design artifact while “thinking aloud”. The moderator minimizes intrusion, identifies usability problems, and tracks metrics such as time on task and completion percentage.
• Post-task questions: the moderator asks the participant about their experience with the tasks, level of satisfaction with the design, and points of confusion or ambiguity.
For usefulness studies, pre-task questions are no longer just about determining demographics and level of experience. Rather, they need to set the participant’s mindset for an evaluation of the design from a usefulness perspective. Borrow from ethnography, contextual inquiry, interviewing, and laddering: daily task flow, current mechanisms, pain points, expectations, desires, ideal scenarios, other systems.
The emphasis changes from contrived tasks to participant-directed tasks, determined from pre-task and probing questions: “Tell me how you would use this application in your daily routine. Okay, now please go do some of those on this prototype.” “You mentioned that you would need to determine your size; how would you do that?”
Gather expectations at the outset of the task: “What would you expect to happen? What information would you expect the page to contain?” This sets the basis for participant commentary (not that you should design exactly what they say, but it can inform ideas). This may not be a natural way to interact with the design, but we’re not testing usability.
Follow-up questions emphasize expectations and value: “How did completing this task compare with your expectations? Was it better or worse, and why? What’s missing? What’s superfluous?” “How does it compare with other applications or systems?” “What other features or functions would you need when completing this task?”
The participant is directed to walk back through the task, responding to questions at key interaction points. Ask strategic probing questions on salient components that interrupt or inform the task: “What would make this more useful? How would you use this in your work/life?” “Does this table contain the columns that you need? Did the categories make sense?” Probe on areas that were not covered by the participant’s task: “How would you get the detailed information about this product?”
Usability is still important! But recognize when you have different goals, and adjust accordingly. Be aware of the nuances and differences between usability and usefulness. Set clear expectations and goals with the project team; consider naming it a “usefulness study” rather than a “usability test”. Drawbacks: you may need deeper prototypes, and you are not getting at usability problems. Danger: asking the participant to help design the product is easy to fall into, but not good.