KPI-UX-BostonUPA-20120507

Speaker notes:
  • UX researcher at IBM Lotus for seven years, working on various products including Notes, ST, and SmartCloud, but mostly on social computing. In recent years we have focused more on quantitative user experience research: conducting large-scale unmoderated usability studies and large-scale surveys, and building our own dashboard, which is a good communication tool. We still do lots of iterative usability testing and user interviews, with more triangulation of quantitative and qualitative research.
  • Lord Kelvin was a physicist. Qualitative vs. quantitative data in UX.
  • Art or science? Apple vs. Google…
  • Importance of measurement here
  • Thought-leaders embrace metrics.

    1. Best Practices for Defining, Evaluating & Communicating Key Performance Indicators (KPIs) of User Experience. Meng Yang, User Experience Researcher, IBM Software Group
    2. Agenda •  Why measure user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
    3. "If you cannot measure it, you cannot improve it." - Lord Kelvin
    4. Design: intuition-driven or data-driven? Reference: Metrics-Driven Design by Joshua Porter: http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
    5. 5 Reasons why metrics are a designer's best friend •  Metrics reduce arguments based on opinion. •  Metrics give you answers about what really works. •  Metrics show you where you're strong as a designer. •  Metrics allow you to test anything you want. •  Clients or stakeholders love metrics. Reference: Metrics-Driven Design by Joshua Porter, March 2011: http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
    6. 7 Ingredients of a successful UX strategy •  Business strategy •  Competitive benchmarking •  Web analytics •  Behavioral segmentation, or personas, and usage scenarios •  Interaction modeling •  Prioritization of new features and functionality •  Social / mobile / local. Reference: Paul Bryan's article on UXmatters, October 2011: http://www.uxmatters.com/mt/archives/2011/10/7-ingredients-of-a-successful-ux-strategy.php
    7. Lean startup / lean UX movement. Reference: The Lean Startup by Eric Ries: http://theleanstartup.com/principles
    8. Agenda •  Why measure user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
    9. Characteristics of good metrics. From [1]: •  Actionable •  Accessible •  Auditable •  Powerful •  Low-cost •  Easy-to-use •  Time-series tracking. From [2]: •  Business alignment •  Honest assessment •  Consistency •  Repeatability and reproducibility •  Actionability •  Predictability •  Peer comparability. References: [1] Eric Ries, The Lean Startup: http://theleanstartup.com/ [2] Forrest Breyfogle, "Integrated Enterprise Excellence Volume II: Business Deployment"
    10. User experience metrics used. Task-level: task success rate, task easiness rating (SEQ), task error rate, task time, clickstream data (first click analysis, heat map, number of clicks), and (CogTool) task time and clicks for the optimal path. Product-level: System Usability Scale (SUS) score and Net Promoter Score (NPS).
    11. System Usability Scale (SUS) •  Why chosen? •  The most sensitive post-study questionnaire. [2] •  Free, short, valid, and reliable. [1] •  A single SUS score can be calculated and a grade can be assigned. [1] •  Over 500 user studies to compare with. [1] •  Chose the positive version because no significant differences were found versus the mixed version, and it is easier for people to answer. [2] •  Customized levels for color coding •  Level 0 (poor): less than 63 •  Level 1 (minimally acceptable): 63-72 [63 is the lower range of C-] •  Level 2 (good): 73-78 [73 is the lower range of B-] •  Level 3 (excellent): 79-100 [79 is the lower range of A-] References: [1] Jeff Sauro's blog entry, Measuring Usability with the System Usability Scale (SUS): http://www.measuringusability.com/sus.php [2] Jeff Sauro & James Lewis, Quantifying the User Experience: Practical Statistics for User Research
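
The deck stops at the customized levels and doesn't show the scoring arithmetic. For the all-positive version it cites, Sauro & Lewis score each of the ten 1-5 items as (rating - 1) and scale the sum by 2.5. A minimal Python sketch of that scoring plus the deck's color-coding levels; the function names are illustrative, not from the deck:

```python
def sus_score(responses):
    """SUS score for the all-positive version of the questionnaire.

    `responses` holds the ten item ratings on a 1-5 scale. In the
    positive version every item contributes (rating - 1); the sum is
    scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    return 2.5 * sum(r - 1 for r in responses)

def sus_level(score):
    """Map a SUS score to the deck's customized color-coding levels."""
    if score < 63:
        return 0  # poor
    if score <= 72:
        return 1  # minimally acceptable (63 is the lower range of C-)
    if score <= 78:
        return 2  # good (73 is the lower range of B-)
    return 3      # excellent (79 is the lower range of A-)

score = sus_score([4, 4, 5, 3, 4, 4, 5, 4, 4, 4])
print(score, sus_level(score))  # 77.5 2
```
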
    12. Net Promoter Score (NPS) •  Why chosen? •  Industry standard and widely popular. •  Benchmark data to compare. •  Customized levels •  Level 0 (poor): less than 24% •  Level 1 (minimally acceptable): 24%-45% [24% is the lower range for computer software] •  Level 2 (good): 46%-67% [46% is the mean for computer software] •  Level 3 (excellent): 68%-100% [68% is the upper range for computer software] Reference: [1] NPS website: http://www.netpromoter.com/why-net-promoter/know/
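
For reference, a sketch of the standard NPS computation (promoters rate 9-10 and detractors 0-6 on the 0-10 recommendation question) mapped onto the deck's customized levels; function names are illustrative:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    NPS is the percentage of promoters (9-10) minus the percentage of
    detractors (0-6), so it ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def nps_level(score):
    """The deck's customized levels, anchored on computer-software benchmarks."""
    if score < 24:
        return 0  # poor
    if score <= 45:
        return 1  # minimally acceptable
    if score <= 67:
        return 2  # good
    return 3      # excellent

score = nps([10, 9, 8, 7, 9, 3, 10, 6])  # 4 promoters, 2 detractors
print(score, nps_level(score))           # 25.0 1
```
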
    13. Task success rate •  Why chosen? •  Easy to collect •  Easy to understand •  Popular among the UX community [1] •  Customized levels •  Fail: less than 75% •  Pass: 75% or more. Reference: [1] Jakob Nielsen's article on usability metrics, January 2001: http://www.useit.com/alertbox/20010121.html
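
A minimal sketch of the metric and its pass/fail criterion exactly as the slide defines them; the function name is illustrative:

```python
def task_success_rate(outcomes):
    """Percentage of participants who completed the task.

    `outcomes` is a list of booleans (True = success). The deck's
    customized criterion: pass at 75% or more, fail below that.
    """
    rate = 100.0 * sum(outcomes) / len(outcomes)
    return rate, ("pass" if rate >= 75.0 else "fail")

print(task_success_rate([True, True, False, True]))  # (75.0, 'pass')
```
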
    14. Task ease of use: SEQ (Single Ease Question) •  Why chosen? •  Reliable, sensitive & valid. [1] •  Short, easy to respond to, easy to administer & easy to score. [1] •  The second most sensitive after-task question, next to SMEQ, but much simpler. [2] •  Customized levels for color coding •  Fail: less than 75% •  Pass: 75% or more. References: [1] Jeff Sauro's blog entry, If you could only ask one question, use this one: http://www.measuringusability.com/blog/single-question.php [2] Jeff Sauro & James Lewis, Quantifying the User Experience: Practical Statistics for User Research
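
The slide applies the same 75% bar to the share of participants who considered the task easy, but it doesn't say which SEQ ratings count as "easy". The sketch below assumes 5-or-above on the 7-point scale; that cutoff is an assumption, not something stated in the deck:

```python
def seq_percent_easy(ratings, cutoff=5):
    """Percentage of 1-7 SEQ ratings at or above `cutoff`.

    The 5-or-above cutoff for "considered task easy" is an assumed
    convention; the deck only gives the 75% pass/fail criterion.
    """
    pct = 100.0 * sum(1 for r in ratings if r >= cutoff) / len(ratings)
    return pct, ("pass" if pct >= 75.0 else "fail")

print(seq_percent_easy([7, 6, 4, 5]))  # (75.0, 'pass')
```
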
    15. Summary of user experience metrics chosen
      •  Task success rate. Definition: percentage of tasks that users complete successfully. Why chosen: easy to collect; easy to understand; popular among the UX community. Methods: large-scale and small-scale usability testing. Criteria: fail below 75%; pass at 75% or more.
      •  Task ease of use. Definition: one standard Single Ease Question (SEQ). Why chosen: reliable, sensitive & valid; short, easy to respond to, easy to administer & easy to score; the second most sensitive after-task question, next to SMEQ, but much simpler. Methods: large-scale and small-scale usability testing. Criteria: fail below 75%; pass at 75% or more.
      •  Net Promoter Score (NPS). Definition: one standard recommendation question. Why chosen: industry standard and widely popular; benchmark data to compare. Methods: large-scale usability testing; large-scale surveys. Criteria: poor below 24%; minimally acceptable 24%-45%; good 46%-67%; excellent 68%-100%.
      •  System Usability Scale (SUS). Definition: a list of 10 standard ease-of-use questions (positive version). Why chosen: free, short, valid, and reliable; a single SUS score can be calculated and a grade can be assigned; over 500 user studies to compare with; the most sensitive post-study questionnaire. Methods: large-scale usability testing; large-scale surveys. Criteria: poor below 63; minimally acceptable 63-72; good 73-78; excellent 79-100.
    16. Clickstream data •  Good to have •  Yet another way to visually illustrate the problems that show up in other metrics such as task success rate and easiness ratings. •  The navigation path is very helpful to have. •  But hard to implement & analyze in UserZoom •  Approach 1: asking participants to install a plugin, which reduces the participation rate. •  Approach 2: inserting a line of JavaScript code on every page of the website, which is hard to achieve.
    17. Task time •  Good for benchmark comparison •  between prototypes/releases. •  with competitors. •  But hard to measure accurately •  For large-scale studies, people might be multitasking. •  For small-scale studies, people might be asked to think aloud. •  You don't know how hard people tried on the task.
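
The deck doesn't prescribe a summary statistic for task time. Sauro & Lewis, cited above, recommend the geometric mean over the arithmetic mean for small-sample task times because time data is right-skewed; a sketch under that assumption:

```python
import math

def geometric_mean_time(times_seconds):
    """Geometric mean of task times in seconds.

    Computed as exp(mean(log(t))); less sensitive to the long right
    tail that task-time data typically has.
    """
    logs = [math.log(t) for t in times_seconds]
    return math.exp(sum(logs) / len(logs))

print(round(geometric_mean_time([30, 42, 55, 120, 38]), 1))  # ~50.1
```
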
    18. Agenda •  Why measure user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
    19. Example task performance table (fake data for illustration purposes)

    Task performance data summary (% means percentage of all participants for each task):

    | Task   | Successful with the task | Considered task easy | Highlights of the problem      | Recommendation |
    | Task 1 | 90%                      | 95%                  | Easy task                      | None           |
    | Task 2 | 61%                      | 42%                  | Hard to find the action button | xx             |
    | Task 3 | 55%                      | 30%                  | Too many clicks                | xx             |
    | Task 4 | 85%                      | 90%                  | Relatively easy task           | xx             |

    Benchmark comparison between two studies on the same tasks:

    | Task   | Success, study 1 | Success, study 2 | Change | Easy, study 1 | Easy, study 2 | Change |
    | Task 1 | 89% | 84% | -5.1%  | 60% | 63% | 3.0%  |
    | Task 2 | 89% | 70% | -19.0% | 65% | 60% | -5.2% |
    | Task 3 | 62% | 55% | -6.8%  | 75% | 87% | 12.0% |
    | Task 4 | 71% | 90% | 19.0%  | 56% | 80% | 24.0% |
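
A small sketch of how the "Change" columns in the benchmark comparison can be produced. In the fake data the column reads as roughly the percentage-point difference between studies, which is the reading assumed here:

```python
# Fake data mirroring the benchmark table above: task -> (study 1 %, study 2 %).
success = {"Task 1": (89, 84), "Task 2": (89, 70),
           "Task 3": (62, 55), "Task 4": (71, 90)}

for task, (s1, s2) in success.items():
    change = s2 - s1  # percentage-point change between the two studies
    trend = "improved" if change > 0 else "regressed" if change < 0 else "flat"
    print(f"{task}: {s1}% -> {s2}% ({change:+d} pts, {trend})")
```
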
    20. User experience scorecard examples. High-priority but poor-usability tasks should be the focus area. Core use cases that have the most failed tasks should be the focus. Fake data for illustration purposes.
    21. Illustrated task flow by task performance data. Fake data for illustration purposes.
    22. Portfolio dashboard and health index. Details on how to calculate the health index.
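
The slide points to details on calculating the health index without reproducing them, so the roll-up below is purely hypothetical: a weighted average of per-metric levels on the deck's 0-3 scale, with invented weights. It illustrates the shape of such an index, not IBM's actual formula:

```python
# Hypothetical sketch: the weights and input levels are illustrative
# assumptions, not the calculation referenced on the slide.
levels = {"SUS": 2, "NPS": 1, "task_success": 3}         # deck's 0-3 scale
weights = {"SUS": 0.4, "NPS": 0.3, "task_success": 0.3}  # must sum to 1

def health_index(levels, weights):
    """Weighted average of metric levels, rescaled to 0-100."""
    raw = sum(levels[m] * weights[m] for m in levels)  # 0..3
    return round(100.0 * raw / 3.0, 1)

print(health_index(levels, weights))  # 66.7 (hypothetical)
```
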
    23. User story mapping (in exploration). Incorporate tasks and metrics into the agile development process. Reference: [1] User story mapping presentation by Steve Rogalsky: http://www.slideshare.net/SteveRogalsky/user-story-mapping-11507966
    24. Agenda •  Why measure user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
    25. Best practices •  Great executive buy-in on the user experience metrics and dashboard. •  Focus on core use cases and top tasks to evaluate. •  Use standardized questions/metrics for peer comparability. •  Try random sampling instead of convenience sampling when recruiting participants for large-scale usability testing. •  Visualization is the key to effective communication. •  KPIs/metrics catch people's attention, but qualitative information provides the insights.
    26. Future work •  Align UX metrics with business goals. •  User experience vs. customer experience. •  Apply metrics to interaction models and scenarios. •  Communicate UX metrics to influence product strategy. •  Incorporate UX metrics into the agile development process. •  Collaborate with the analytics team to gather metrics such as engagement/adoption/retention and cohort analysis. •  How do we measure usefulness (vs. ease of use)?
    27. Thank you! Questions?
