Best practices for defining, evaluating, & communicating Key Performance Indicators (KPIs) of user experience

Measuring user experience quantitatively has been on the rise in the user experience community in recent years. Quantifying user experience with KPIs has proven a good way to communicate and negotiate usability problems with the product development team, the product management team, and especially business stakeholders, elevating the importance of usability. In this talk, we discuss the following best practices from the IBM Lotus user experience team, and invite interactive discussion with fellow user experience professionals:

- The most useful user experience KPIs at both the task level and the system level
- Gathering KPIs through a range of user research methods: not only large-scale unmoderated usability testing and surveys, but also small-scale usability testing, predictive human performance modeling (CogTool), and complexity analysis
- Applying standard questionnaires such as the SEQ (Single Ease Question), SUS (System Usability Scale), and NPS (Net Promoter Score) for benchmark comparison
- Focusing on core tasks and top usability issues


Presentation Transcript

    • Best Practices for Defining, Evaluating & Communicating Key Performance Indicators (KPIs) of User Experience
      Meng Yang, User Experience Researcher, IBM Software Group, @IBMSocialBizUX
      © Copyright IBM Corporation 2012
    • Agenda
      - Why measure user experience?
      - Which user experience KPIs or metrics do we use?
      - Future work
    • "If you cannot measure it, you cannot improve it." - Lord Kelvin
    • Design: intuition-driven or data-driven?
      Reference: "Metrics-Driven Design" by Joshua Porter: http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
    • 5 Reasons why metrics are a designer's best friend
      - Metrics reduce arguments based on opinion.
      - Metrics give you answers about what really works.
      - Metrics show you where you're strong as a designer.
      - Metrics allow you to test anything you want.
      - Clients love metrics.
      Reference: "Metrics-Driven Design" by Joshua Porter: http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
    • Stakeholders (executives/developers) love numbers!
    • The Lean Startup / Lean UX movement
      Reference: The Lean Startup by Eric Ries: http://theleanstartup.com/principles
    • What makes good metrics?
      - Actionable
      - Accessible
      - Auditable
      - Repeatability and reproducibility
      - Low-cost
      - Easy-to-use
      - Predictability
      - Business alignment
      - Honest assessment
      - Consistency
      - Powerful
      - Actionability
      - Time-series tracking
      - Peer comparability
      References: [1] Eric Ries, The Lean Startup: http://theleanstartup.com/ [2] Forrest Breyfogle, Integrated Enterprise Excellence Volume II: Business Deployment
    • User experience metrics used
      Task-level: task success rate, task easiness rating (SEQ), task error rate, task time, first-click analysis, heat map, number of clicks, and predictive human performance modeling (CogTool) task time and clicks for the optimal path
      Product-level: System Usability Scale (SUS) score, Net Promoter Score (NPS)
    • System Usability Scale (SUS), positive version
      Why chosen?
      - Free, short, valid, and reliable. [1]
      - A single SUS score can be calculated and a grade can be assigned. [1]
      - Over 500 user studies to compare against. [1]
      - The most sensitive post-study questionnaire. [2]
      References:
      [1] Jeff Sauro's blog entry "Measuring Usability with the System Usability Scale (SUS)": http://www.measuringusability.com/sus.php
      [2] Jeff Sauro & James Lewis, Quantifying the User Experience: Practical Statistics for User Research. http://www.amazon.com/Quantifying-User-Experience-Practical-Statistics/dp/0123849683/ref=sr_1_1?s=books&ie=UTF8&qid=1327605730&sr=1-1
    • SUS score calculation
      Calculation [1]:
      - For odd-numbered items, subtract 1 from the user's response.
      - For even-numbered items, subtract the user's response from 5.
      - This scales all values from 0 to 4 (with 4 being the most positive response).
      - Add up the converted responses for each user and multiply the total by 2.5. This converts the range of possible values from 0-40 to 0-100.
      Calculation package: Jeff Sauro offers a SUS guide & calculator package ($299.99 for a site license): http://www.measuringusability.com/products/SUSpack
      Example SUS score: 66.7, level C
      Reference: [1] Jeff Sauro's blog entry "Measuring Usability with the System Usability Scale (SUS)": http://www.measuringusability.com/sus.php
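As a minimal sketch (my illustration, not part of the deck), the scoring rule above translates directly into a few lines of Python; the function name and the example ratings are invented:

```python
# Convert one participant's ten 1-5 SUS item ratings into a 0-100 score,
# following the calculation described on the slide above.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 item responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        if item % 2 == 1:        # odd items: subtract 1 from the response
            total += r - 1
        else:                    # even items: subtract the response from 5
            total += 5 - r
    return total * 2.5           # rescale the 0-40 sum to 0-100

# Hypothetical participant: yields 67.5, close to the slide's 66.7 example
print(sus_score([4, 2, 4, 2, 3, 3, 4, 2, 3, 2]))
```

Note that this alternating rule assumes the standard SUS with mixed positive/negative item wording; for the all-positive version named on the previous slide, every item would instead be scored as the response minus 1.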
    • Net Promoter Score (NPS)
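The transcript captures only this slide's title, so as background (my addition, not content from the slide): respondents answer "How likely are you to recommend this product?" on a 0-10 scale; 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch, with invented sample ratings:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

print(net_promoter_score([10, 9, 8, 7, 6, 3, 9, 10, 5, 8]))  # -> 10
```

The result ranges from -100 (all detractors) to +100 (all promoters).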
    • Task success rate
      - Large-scale usability testing: self-reported success in UserZoom
      - Small-scale moderated usability testing: success judged through observation
      Example: task-based unmoderated usability testing through UserZoom
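However success is recorded, the raw result is x successes out of n attempts, and with the small samples of moderated testing the observed rate deserves a confidence interval. Below is a sketch of the adjusted-Wald (Agresti-Coull) interval, which the Sauro & Lewis book cited on the SUS and SEQ slides recommends for small-sample completion rates; this code is my addition, not from the deck:

```python
import math

# 95% adjusted-Wald (Agresti-Coull) confidence interval for a task
# success rate: add z^2/2 successes and z^2 trials, then apply the
# usual Wald formula to the adjusted proportion.
def success_rate_ci(successes, n, z=1.96):
    p_adj = (successes + z * z / 2) / (n + z * z)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z * z))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 8 of 10 participants completed the task.
low, high = success_rate_ci(8, 10)
print(f"observed 80%, 95% CI roughly {low:.0%} to {high:.0%}")  # ~48% to 95%
```

The wide interval at n = 10 is a reminder to compare small-sample percentages across studies cautiously.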
    • SEQ (Single Ease Question)
      Why chosen?
      - Reliable, sensitive & valid. [1]
      - Short, easy to respond to, easy to administer & easy to score. [1]
      - The second most sensitive post-task question, next to the SMEQ, but much simpler. [2]
      References:
      [1] Jeff Sauro's blog entry "If you could only ask one question, use this one": http://www.measuringusability.com/blog/single-question.php
      [2] Jeff Sauro & James Lewis, Quantifying the User Experience: Practical Statistics for User Research. http://www.amazon.com/Quantifying-User-Experience-Practical-Statistics/dp/0123849683/ref=sr_1_1?s=books&ie=UTF8&qid=1327605730&sr=1-1
    • Example task performance table (fake data for illustration purposes)

      Task performance data summary (% means percentage of all participants for each task):

      Task     Successful with the task   Considered task easy   Highlights of the problem        Recommendation
      Task 1   90%                        95%                    Easy task                        None
      Task 2   61%                        42%                    Hard to find the action button   xx
      Task 3   55%                        30%                    Too many clicks                  xx
      Task 4   85%                        90%                    Relatively easy task             xx

      Benchmark comparison between two studies on the same tasks:

               Successful with the task         Considered task easy
      Task     Study 1   Study 2   Change       Study 1   Study 2   Change
      Task 1   89%       84%       -5.1%        60%       63%       3.0%
      Task 2   89%       70%       -19.0%       65%       60%       -5.2%
      Task 3   62%       55%       -6.8%        75%       87%       12.0%
      Task 4   71%       90%       19.0%        56%       80%       24.0%
    • Use cases and tasks dashboard (fake data for illustration purposes)
    • Clickstream data in large-scale user testing
      Good to have
      - Yet another way to visually illustrate the problems that also show up in other metrics such as task success rate and easiness ratings.
      But hard to implement & analyze
      - Approach 1: ask participants to install a plugin, which reduces the participation rate.
      - Approach 2: insert a line of JavaScript code on every page of the website, which is hard to achieve.
    • Task time and storyboard in CogTool
      What is CogTool?
      - Produces a valid cognitive model predicting how long a skilled user will take to complete a task.
      - Developed by Bonnie John at CMU (now at IBM Research).
      Pros
      - Free and easy to install.
      - Good way to get competitive data (task time).
      - Task visualization shows the most time-consuming task steps.
      Cons
      - Hard to learn at first.
      - Does not address issues for novice users.
      - Inter-rater validity and consistency.
      Reference: CogTool website: http://cogtool.hcii.cs.cmu.edu/
    • Best practices
      - Focus on core use cases and top tasks.
      - Use standardized questions/metrics for peer comparability.
      - You don't always need large-scale (unmoderated) usability studies to gather metrics.
      - Visualization is the key to effective communication; visual.ly is a good site for creating infographics and visualizations.
      - KPIs/metrics catch people's attention, but qualitative information provides the insights.
    • Future work
      - Align UX metrics with business goals.
      - Communicate UX metrics to influence product strategy.
      - Incorporate UX metrics into the agile development process.
      - Collaborate with the analytics team to gather metrics such as engagement, adoption, and retention.
      - How do we measure usefulness (vs. ease of use)?
    • © Copyright IBM Corporation 2012
      IBM Lotus Software, 550 King St., Littleton, MA 01460, U.S.A.
      Produced in the United States of America, May 2012. All Rights Reserved.
      IBM, the IBM logo and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml
      Other company, product and service names may be trademarks or service marks of others.
      References in this publication to IBM products and services do not imply that IBM intends to make them available in all countries in which IBM operates.