WUD Chicago: It Makes $ense

The User Research Practice in my experience at Orbitz
Why do we test?
   Bad design = lost transactions = lost revenue


• To predict adoption of new features—what is
  worth building?
• To understand the audience's view of technology
• To predict business performance of a design
• To evaluate our performance against others
Overview of Methods

Method                                   Insights gained               When in lifecycle
Roundtables                              Subjective                    Strategy
Focus groups                             Subjective                    Ideation
Usability tests                          Subjective and quantitative   Any time
Surveys (automated or traditional)       Quantitative                  Any time
Automated methods (log analysis, etc.)   Quantitative                  Post-launch
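
To illustrate the quantitative, post-launch end of this table, here is a minimal sketch of an automated log analysis. It is illustrative only: the CSV clickstream export, its "session_id" and "event" columns, and the "booking_confirmed" goal event are assumptions for the example, not a description of any Orbitz system.

    # Illustrative sketch only: a toy post-launch log analysis. The clickstream
    # CSV, its "session_id"/"event" columns, and the goal event name are all
    # hypothetical assumptions for this example.
    import csv
    from collections import defaultdict

    def conversion_rate(log_path, goal_event="booking_confirmed"):
        """Share of sessions that ever reached the goal event."""
        sessions = defaultdict(set)          # session_id -> set of events seen
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                sessions[row["session_id"]].add(row["event"])
        if not sessions:
            return 0.0
        converted = sum(1 for events in sessions.values() if goal_event in events)
        return converted / len(sessions)

    # Usage: print(conversion_rate("clickstream.csv"))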
Performing the usability test
• One or more design variants of a new feature
• Existing features versus redesigns
• Users' ability to either identify or successfully
  complete a given task (see the sketch after this list)
• Bad design = dissatisfied users
• Designers may not moderate tests on
  their own designs
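
For the task-completion bullet above, the sketch below shows one way the quantitative side of a usability test might be reported. The numbers are hypothetical; the adjusted-Wald interval is simply a common choice for the small samples typical of lab tests, not a method the deck prescribes.

    # Minimal sketch, not Orbitz code: observed success rate for a task plus an
    # approximate 95% adjusted-Wald interval, suited to small usability samples.
    from math import sqrt

    def task_success(successes, participants, z=1.96):
        """Return (observed rate, interval low, interval high)."""
        n_adj = participants + z * z
        p_adj = (successes + z * z / 2) / n_adj
        margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
        return (successes / participants,
                max(0.0, p_adj - margin),
                min(1.0, p_adj + margin))

    # Hypothetical example: 6 of 8 participants completed the task.
    # task_success(6, 8) -> (0.75, ~0.40, ~0.94)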
Interpreting the results
• Often, team members will observe a few
  participants and jump to conclusions about
  the results
• Easy for non-practitioners to assume they are
  drawing the same conclusions a usability
  specialist would—we’re co-workers, right?
• It is essential to maintain a balanced view in
  the face of questioning by business stakeholders,
  for whom features = revenue
Communicating Findings
• Need to highlight the findings as they pertain
  to the project that initiated the study
• Must make sure that issues users observe in
  your interface are actually considered and
  acted upon
• In other words, it is not enough to prove your
  hypothesis; you must also report, and try to
  spur action on, unrelated findings that impair
  usability
When Practice and Theory Collide

• No matter what choices are made and
  how results are acted upon:
  – Remember, you are actually testing a
    hypothesis, not trying to have your model
    (or others’) chosen by users
  – You can continue to monitor performance
    via other methods after launch
  – Don’t accept broken windows

World Usability Day 2005 • User Research at Orbitz
