Measuring User Experience

Slides from a sharing session about measurement of User Experience through usability metrics

Published in: Design


  1. Measuring User Experience. Alan Ho, 01 March 2012
  2. Agenda: 1. Definition of User Experience (UX) (2 minutes); 2. How to Measure UX? (10 minutes); 3. Why Should We Measure UX? (3 minutes); 4. Examples of UX Measures (12 minutes); 5. Challenges (1 minute); 6. Conclusion (1 minute)
  3. Definition of User Experience
  4. "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." (the ISO 9241-11 definition of usability)
  5. "Encompasses all aspects of the end-user's interaction with the company, its services, and its products." (the Nielsen Norman Group definition of user experience)
  6. Usability ≠ User Experience
  7. How to Measure UX?
  8. Noooo... My head hurts!
  9. Choosing metrics for a usability study scenario (a matrix mapping scenarios to metric types). Columns: Task Success, Task Time, Errors, Efficiency, Learnability, Issues-Based Metrics, Self-Reported Metrics, Behavioral and Physiological Metrics, Combined and Comparative Metrics, Live Application Data, Card-Sorting Data. Rows: completing a transaction; comparing products; evaluating frequent use of the same product; evaluating navigation and/or information architecture; increasing awareness; problem discovery; maximizing usability for a critical product; creating an overall positive user experience; evaluating the impact of subtle changes; comparing alternative designs. (The checkmarks in the matrix cells were not preserved in the export.)
  10. Ten myths about usability metrics (adapted from T. Tullis et al., "Top Ten Myths about Usability"): 1. They take too much time to collect. 2. They cost too much money. 3. They are not useful when focusing on small improvements. 4. They don't help us understand causes. 5. The data are too noisy. 6. You can just trust your gut. 7. They don't apply to new products. 8. None exist for the type of issues we are dealing with. 9. They cannot be understood or appreciated by management. 10. It is difficult to collect reliable data with a small sample size.
  11. Why Should We Measure UX?
  12. It's high time we knew where we are now, and where the heck we're heading eventually.
  13. Chart: user experience level plotted against number of releases. What is the current level of UX?
  14. The same chart: where were we? Any idea?
  15. The same chart: where are we heading?
  16. Examples of UX Measures
  17. Task times for 12 participants: 34s, 33s, 28s, 44s, 46s, 21s, 22s, 53s, 22s, 29s, 39s, 50s. Descriptive statistics: mean 35.08, standard error 3.25, median 33.50, mode 22.00, standard deviation 11.24, sample variance 126.45, kurtosis -1.32, skewness 0.25, range 32.00, minimum 21.00, maximum 53.00, sum 421.00, count 12, confidence level (95.0%) 7.14. The population mean time (at 95% confidence) is between 27.94s and 42.22s.
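The descriptive statistics and confidence interval on this slide can be reproduced with Python's standard library alone. This is a minimal sketch: the two-tailed t critical value for df = 11 is hardcoded from a t table rather than computed.

```python
import math
import statistics

# Task times in seconds for the 12 participants on the slide
times = [34, 33, 28, 44, 46, 21, 22, 53, 22, 29, 39, 50]

mean = statistics.mean(times)        # ~35.08
sd = statistics.stdev(times)         # ~11.24 (sample standard deviation)
se = sd / math.sqrt(len(times))      # ~3.25 (standard error of the mean)

# 95% confidence interval for the population mean time;
# 2.201 is the two-tailed t critical value for df = 11 (from a t table)
margin = 2.201 * se                  # ~7.14
low, high = mean - margin, mean + margin   # ~27.9s to ~42.2s
```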
  18. Expert vs. novice time-on-task in seconds across 12 observations, shown as a line chart (expert/novice): T1 34/45, T2 33/48, T3 28/53, T4 44/66, T5 46/67, T6 21/35, T7 22/39, T8 53/21, T9 22/34, T10 29/55, T11 39/59, T12 50/70.
  19. Two-sample t-test on the expert vs. novice times: mean 35.08 vs. 49.33, variance 126.45 vs. 229.70, 12 observations each, pooled variance 178.07, hypothesized mean difference 0, df 22, t stat -2.62, one-tail p 0.01 (t critical 1.72), two-tail p 0.02 (t critical 2.07): a statistically significant difference.
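The pooled-variance (equal-variance) two-sample t statistic on this slide can be checked in a few lines of stdlib Python. This sketch stops at the t statistic and does not compute the p-value:

```python
import math
import statistics

# Time-on-task (seconds) for the expert and novice groups on the slide
expert = [34, 33, 28, 44, 46, 21, 22, 53, 22, 29, 39, 50]
novice = [45, 48, 53, 66, 67, 35, 39, 21, 34, 55, 59, 70]

n1, n2 = len(expert), len(novice)
m1, m2 = statistics.mean(expert), statistics.mean(novice)           # ~35.08, ~49.33
v1, v2 = statistics.variance(expert), statistics.variance(novice)   # ~126.45, ~229.70

# Pooled variance for the equal-variance two-sample t-test
pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)            # ~178.07
t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))               # ~-2.62
```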
  20. Per-task completion rate, task time, and percent efficiency, shown as a bar chart: Task 1: 65%, 1.5 min, 43%; Task 2: 67%, 1.4 min, 48%; Task 3: 40%, 2.1 min, 19%; Task 4: 74%, 1.7 min, 44%; Task 5: 85%, 1.2 min, 71%; Task 6: 90%, 1.4 min, 64%; Task 7: 49%, 2.1 min, 23%; Task 8: 33%, 1.3 min, 25%.
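The percent-efficiency column on this slide is the completion rate divided by the mean task time, i.e. successful completions per minute expressed as a percentage. A small sketch of that computation:

```python
# Completion rate and mean task time (minutes) per task, from the slide
tasks = {
    1: (0.65, 1.5), 2: (0.67, 1.4), 3: (0.40, 2.1), 4: (0.74, 1.7),
    5: (0.85, 1.2), 6: (0.90, 1.4), 7: (0.49, 2.1), 8: (0.33, 1.3),
}

# Percent efficiency = completion rate / task time: the proportion of
# successful task completions per minute, shown as a percentage
efficiency = {task: rate / time for task, (rate, time) in tasks.items()}
```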
  21. Bar chart: success rate for Tasks 1-5 under Design A (approximately 50%, 45%, 40%, 25%, 20%).
  22. Bar chart: success rate for Tasks 1-5 comparing Designs A, B, and C.
  23. Bar chart: tasks successfully completed per minute for Prototypes 1-4 (roughly 1.2 to 1.6 tasks per minute).
  24. Task-success matrix for 12 participants across 5 tasks (Complete / Did Not Complete), with success percentage per task: Task 1: 67%, Task 2: 42%, Task 3: 92%, Task 4: 75%, Task 5: 83%. (Per-participant cells were not reliably preserved in the export.)
  25. Scatter plot: months of experience vs. average errors per day for 12 participants, with a linear trend line. Correlation coefficient: -0.76 (R² = 0.58). (Per-participant values were not reliably preserved in the export.)
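A correlation like the -0.76 on this slide can be computed with the standard Pearson formula. The data below are hypothetical stand-ins, since the per-participant values did not survive the export cleanly; the function itself is the point:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical stand-in data: months of experience vs. average errors
# per day (more experience, fewer errors -> negative correlation)
experience = [6, 8, 4, 7, 12, 11, 1, 9, 8, 0, 6, 2]
errors = [4, 3, 8, 5, 1, 3, 9, 4, 2, 4, 1, 5]

r = pearson_r(experience, errors)   # negative; r**2 gives the chart's R²
```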
  26. Line chart: time-on-task (sec) over Trials 1-7 for Design 1 (a learnability curve).
  27. Line chart: time-on-task (sec) over Trials 1-7 comparing Designs 1, 2, and 3.
  28. System Usability Scale (SUS) scores per participant, shown as a bar chart. Design A: 80, 88, 76, 90, 93, 67, 68, 55, 77, 71, 88, 80. Design B: 48, 55, 53, 80, 81, 51, 61, 41, 55, 57, 59, 44.
  29. Paired t-test on the two sets of SUS scores: mean 77.75 vs. 57.08, variance 125.48 vs. 153.72, 12 observations each, Pearson correlation 0.65, hypothesized mean difference 0, df 11, t stat 7.23, one-tail p 8.44E-06 (t critical 1.80), two-tail p 1.69E-05 (t critical 2.20): a statistically significant difference.
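The paired (dependent-samples) t statistic on this slide can be reproduced from the per-participant score differences; a stdlib-only sketch:

```python
import math
import statistics

# SUS scores per participant for the two designs on the slide
design_a = [80, 88, 76, 90, 93, 67, 68, 55, 77, 71, 88, 80]
design_b = [48, 55, 53, 80, 81, 51, 61, 41, 55, 57, 59, 44]

# Paired t-test: work on the per-participant score differences (df = 11)
diffs = [a - b for a, b in zip(design_a, design_b)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t = mean_d / (sd_d / math.sqrt(len(diffs)))   # ~7.23
```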
  30. Radar chart of subjective ratings: Usefulness 90%, Satisfaction 50%, Ease of Use 45%, Ease of Learning 40%.
  31. Scatter plot of average expectation rating (x-axis, 1-7) vs. average experience rating (y-axis, 1-7) for Tasks 1-11, divided into quadrants labeled Promoters, Untouchables, Opportunities, and Fixers.
  32. Bar chart: number of unique usability issues found in Designs 1, 2, and 3.
  33. The same chart with issues broken down by severity (Low, Medium, High).
  34. Challenges
  35. (Bullet points; text not preserved in the export.)
  36. What score equals Poor, Fair, Good, or Excellent? (Benchmark scale; chart not preserved in the export.)
  37. Conclusion
  38. 1. Usability is NOT UX. 2. But UX can be measured through usability metrics. 3. UX measurement can be used to infer design appropriateness. 4. Any form of measurement is better than none. 5. Proper metrics and a proper measurement method are key to success. 6. We need to start ASAP.
  39. The End
