World Usability Day 2005 • User Research at Orbitz

This is my presentation from World Usability Day 2005 in Chicago. Thanks to Dayna Bateman for asking me to participate!

  1. WUD Chicago: It Makes $ense
     The User Research Practice in my experience at Orbitz
  2. Why do we test?
     Bad design = lost transactions = lost revenue
     • To predict adoption of new features: what is worth building?
     • To understand the audience's view of technology
     • To predict the business performance of a design
     • To evaluate our performance against others
  3. Overview of Methods

     Method                                   Insights gained               When in lifecycle
     Roundtables                              Subjective                    Strategy
     Focus groups                             Subjective                    Ideation
     Usability tests                          Subjective and quantitative   Any time
     Surveys (automated or traditional)       Quantitative                  Any time
     Automated methods (log analysis, etc.)   Quantitative                  Post-launch
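To make the "Automated methods" row concrete, here is a minimal sketch of a post-launch log analysis: estimating what fraction of sessions that enter a booking funnel go on to complete it. The log format, the column names (session_id, event), and the event names ("search", "purchase") are hypothetical, not from the talk.

```python
# A minimal sketch of post-launch log analysis: of the sessions that start a
# booking funnel, how many reach the end? Column and event names are invented.
import csv
from collections import defaultdict

def funnel_completion_rate(log_path, start_event="search", end_event="purchase"):
    """Fraction of sessions containing start_event that also reach end_event."""
    events_by_session = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: session_id, event
            events_by_session[row["session_id"]].add(row["event"])
    started = [s for s in events_by_session.values() if start_event in s]
    if not started:
        return 0.0
    return sum(1 for s in started if end_event in s) / len(started)

# Example: print(funnel_completion_rate("clickstream.csv"))
```

Run against periodic log exports, a metric like this can flag regressions that lab sessions with a handful of participants would miss.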
  4. Performing the usability test
     • One or more design variants of a new feature
     • Existing features versus redesigns
     • Ability for users to either identify or successfully complete a given task (tallied per variant; see the sketch below)
     • Bad design = dissatisfied users
     • Designers may not moderate tests on their own designs
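As a concrete version of the per-variant tally referenced above, here is a hedged sketch that scores task success for each design variant. The variant names and observations are invented for illustration; this is not Orbitz's actual scoring method.

```python
# Tally task-completion rates per design variant from (variant, completed) pairs.
# The data below is made up; in practice it would come from test session notes.
from collections import defaultdict

def success_rates(observations):
    """observations: iterable of (variant, completed) pairs -> rate per variant."""
    totals = defaultdict(lambda: [0, 0])  # variant -> [successes, attempts]
    for variant, completed in observations:
        totals[variant][1] += 1
        if completed:
            totals[variant][0] += 1
    return {v: s / n for v, (s, n) in totals.items()}

sessions = [("redesign", True), ("redesign", False), ("redesign", True),
            ("existing", True), ("existing", False), ("existing", False)]
print(success_rates(sessions))  # e.g. {'redesign': 0.67, 'existing': 0.33}
```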
  5. Interpreting the results
     • Often, team members will observe a few participants and jump to conclusions about the results
     • It is easy for non-practitioners to assume they are drawing the same conclusions a usability specialist would: we're co-workers, right?
     • It is essential to maintain a balanced opinion in the face of questioning by business stakeholders, for whom features = revenue
  6. Communicating Findings
     • Highlight the findings as they pertain to the project that initiated the study
     • Make sure that what users observe about your interface is actually acted upon or considered
     • In other words, it is not enough to prove your hypothesis: you must also report, and attempt to spur action on, unrelated findings if they impair usability
  7. When Practice and Theory Collide
     • No matter what choices are made and how results are acted upon:
       – Remember, you are testing a hypothesis, not trying to have your model (or others') chosen by users
       – You can continue to monitor performance via other methods after launch, as in the sketch below
       – Don't accept broken windows
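One hedged example of monitoring performance after launch, as the last bullet suggests: a standard two-proportion z-test comparing conversion before and after a redesign. The counts below are invented; only the statistical test itself is standard, and the talk does not prescribe this particular method.

```python
# Two-proportion z-test: did conversion change significantly after a redesign?
# All counts are hypothetical, for illustration only.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 540/10,000 conversions after launch vs. 480/10,000 before.
z = two_proportion_z(540, 10_000, 480, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate a significant change at the 5% level
```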
