Workshop 1 (Analysis and Presenting)

Evaluating User eXperience, performance metrics: measuring efficiency

  1. Analyzing and Presenting Performance Metrics
  2. Analyzing and Presenting Time-on-Task Data
  3. Common way
     • Time-on-task is usually used to measure efficiency.
     • Look at the average amount of time spent on a particular set of tasks.
     • Solution: create ranges or discrete time intervals.
       – To find patterns among participants, report the frequency of participants per time interval (see the sketch below).
       – Find out who took a long time to finish and whether they share common characteristics.
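A minimal sketch of this kind of binning, assuming hypothetical per-participant time-on-task values in seconds; the interval boundaries are illustrative, not prescribed by the slides:

```python
from collections import Counter

# Hypothetical time-on-task values (seconds) for one task, one value per participant.
times = [38, 52, 61, 74, 95, 102, 140, 185, 260]

# Illustrative discrete time intervals: (label, lower bound, upper bound) in seconds.
intervals = [("under 1 min", 0, 60),
             ("1-2 min", 60, 120),
             ("2-3 min", 120, 180),
             ("over 3 min", 180, float("inf"))]

# Frequency of participants per time interval.
freq = Counter(next(name for name, lo, hi in intervals if lo <= t < hi)
               for t in times)
print(freq)                       # which interval holds the most participants?
print(sum(times) / len(times))    # average time-on-task, for comparison
```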
  4. Other solution: thresholds
     • Set a target time for completion; what matters is whether users can complete certain tasks within that time.
     • Calculate the percentage of users above or below the threshold (a sketch follows).
       – e.g. show the percentage of participants who completed each task in less than one minute.
     • The aim is to minimize the number of users who need an excessive amount of time to complete a task.
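A sketch of the threshold idea, assuming the same kind of hypothetical per-participant times and a one-minute threshold chosen purely for illustration:

```python
# Hypothetical times (seconds) for one task; 60 s is an illustrative threshold.
times = [38, 52, 61, 74, 95, 102, 140, 185, 260]
threshold = 60

within = sum(1 for t in times if t < threshold)
pct_within = 100.0 * within / len(times)
print(f"{pct_within:.0f}% of participants finished in under {threshold} seconds")
```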
  5. Issues to know
     • Look at all tasks or just the successful tasks?
       – Successful tasks provide a cleaner measure of efficiency, but give no information on unsuccessful performance.
       – Without the unsuccessful tasks, the measure can be less accurate and a poorer reflection of the overall user experience.
       – Solution: use only the times for the successful tasks, and report errors for the unsuccessful tasks (see the filtering sketch below).
     • Should we use a think-aloud protocol?
       – It helps you contextualize the data and avoid misinterpreting it, but it should not influence the time-on-task measurement.
       – Solution: ask participants to hold most of their comments until the period between tasks.
     • Should we tell participants that time is being measured?
       – Not directly.
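A minimal sketch of the "times for successful tasks only" suggestion, assuming each trial records a completion flag and a time (the field layout is invented for illustration):

```python
# Hypothetical trials: (participant, task, success flag, time in seconds).
trials = [("P1", "T1", True, 48), ("P2", "T1", False, 210),
          ("P3", "T1", True, 75), ("P4", "T1", True, 62)]

# Time-on-task computed only over the successful attempts...
successful_times = [t for _, _, ok, t in trials if ok]
print("mean time (successful only):", sum(successful_times) / len(successful_times))

# ...while the unsuccessful attempts are reported separately (e.g. as errors/failures).
failures = [(p, task) for p, task, ok, _ in trials if not ok]
print("failed attempts:", failures)
```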
  6. Analyzing and Presenting Number of Mistakes
  7. Errors
     • Errors (or mistakes) help you understand possible usability issues and how a specific action or set of actions results in task failure.
     • Errors vs. usability issues:
       – A usability issue is the underlying cause of a problem.
       – An error is the outcome.
  8. Errors can tell
     • How usable something really is:
       – The number of mistakes made, and how they were made.
       – The type and frequency of errors, and their correlation with the product design.
       – The loss in efficiency.
         » e.g. when filling in a form, an error results in lost time to complete the task.
         » e.g. errors can affect cost effectiveness or lead to task failure.
  9. Common way
     • Organize by task:
       – Define the correct set of actions for completing the task.
       – Define the possible correct and incorrect actions.
     • Then collect the number of errors, by participant and by task.
     • How to collect:
       – Observe the participant during a lab study.
       – Or review a video recording.
  10. Error analysis
     • Look at the frequency of errors for each task (a sketch follows):
       – Calculate the average number of errors made by each participant for each task.
       – Calculate the frequency of errors for each task.
     • This helps you to:
       – Find out which task is associated with the most errors.
       – Tell on which tasks participants made the most errors.
     • From those results you get an idea of what the most significant usability issues were.
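A small sketch of this analysis, assuming a hypothetical table of error counts per participant and task:

```python
from statistics import mean

# Hypothetical error counts: errors[participant][task].
errors = {
    "P1": {"T1": 0, "T2": 2, "T3": 1},
    "P2": {"T1": 1, "T2": 3, "T3": 0},
    "P3": {"T1": 0, "T2": 1, "T3": 2},
}
tasks = ["T1", "T2", "T3"]

# Average number of errors per participant for each task.
avg_per_task = {t: mean(errors[p][t] for p in errors) for t in tasks}
print(avg_per_task)    # the task with the highest average is the most error-prone

# Total frequency of errors for each task.
freq_per_task = {t: sum(errors[p][t] for p in errors) for t in tasks}
print(freq_per_task)
```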
  11. Other approach
     • If your concern is not how participants perform a specific task, but how they performed overall:
       – Find the overall error rate for the study by averaging the error rates for each task into a single error rate (sketched below).
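A sketch of that averaging step; the per-task rates below are hypothetical, and "rate" is assumed to mean errors divided by error opportunities (one common normalization), though any consistent per-task rate works the same way:

```python
from statistics import mean

# Hypothetical per-task error rates (errors divided by error opportunities).
task_error_rates = {"T1": 0.08, "T2": 0.25, "T3": 0.12}

# Overall error rate for the study: the average of the per-task rates.
overall_rate = mean(task_error_rates.values())
print(f"overall error rate: {overall_rate:.2f}")
```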
  12. Issues to know
     • Don't double-count errors:
       – Double-counting happens when you assign more than one error to the same event.
       – Test the error counting before starting the procedure, and define clearly which errors you need to count.
     • If you need to know not just the error rate but also why the errors occur:
       – Add a code name to each possible error (see the coding sketch below).
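A sketch of coding errors so that the "why" is preserved along with the count; the code names below are invented for illustration:

```python
from collections import Counter

# Each observed error is logged with a short code naming its likely cause.
observed = [
    ("P1", "T2", "WRONG_MENU"),       # opened the wrong menu
    ("P2", "T2", "WRONG_MENU"),
    ("P2", "T2", "TYPO_IN_FIELD"),    # mistyped a required field
    ("P3", "T3", "MISSED_BUTTON"),    # did not notice the confirm button
]

# Frequency per error code shows not just how many errors occurred, but why.
by_code = Counter(code for _, _, code in observed)
print(by_code)
```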
  13. Analyzing and Presenting Number of Steps Taken
  14. Number of steps taken
     • Task steps also measure efficiency.
     • Efficiency metrics should be concerned not only with time-on-task, but also with the amount of cognitive and physical effort involved.
  15. How to measure - common way
     • Identify the action(s) to be measured; the more actions taken, the more effort is involved.
     • Types of effort:
       – Cognitive: finding the right place to perform an action.
       – Physical: the physical activity required to take the action.
  16. How to identify - common way
     • Identify the action(s) to be measured and define the start and end of an action.
     • Count the actions:
       – The easiest way is to count while participants are doing the task.
       – If that is not possible, use video recordings.
     • Actions should be meaningful, i.e. represent an increase in cognitive and/or physical effort (more actions, more effort).
     • Look only at successful tasks.
  17. Analysis
     • Calculate an average for each task (by participant) to see how many actions are taken.
       – This will help you identify which task requires the most effort.
     • Another way is to use a method called lostness (a hedged sketch follows).
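Lostness is only named on the slide; one common formulation (due to Smith, 1996, and discussed in Measuring the User Experience) combines the minimum number of steps needed, the steps actually taken, and the number of distinct steps visited. A sketch under that assumption, with made-up numbers:

```python
from math import sqrt

def lostness(R, S, N):
    """Lostness as commonly defined (Smith, 1996):
    R = minimum number of pages/steps required for the task,
    S = total number of pages/steps actually visited (revisits counted),
    N = number of different pages/steps visited.
    0 means perfectly efficient navigation; higher values mean more wandering."""
    return sqrt((N / S - 1) ** 2 + (R / N - 1) ** 2)

# Hypothetical participant: the task needs 4 steps, they visited 12, 7 of them distinct.
print(round(lostness(R=4, S=12, N=7), 2))
```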
  18. Combining metrics
     • Combine task success and time-on-task to measure efficiency.
     • How: take the ratio of task completion to task time in minutes (a worked sketch follows).
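A worked sketch of that ratio, using hypothetical success and time data:

```python
# Hypothetical results for one task: (participant, completed? as 1/0, time in minutes).
results = [("P1", 1, 1.5), ("P2", 0, 3.0), ("P3", 1, 2.0), ("P4", 1, 1.0)]

# Efficiency per participant: task completion divided by time in minutes.
per_participant = {p: done / minutes for p, done, minutes in results}
print(per_participant)

# Average efficiency for the task (roughly "completed tasks per minute").
print(sum(per_participant.values()) / len(per_participant))
```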
  19. To read
     • Measuring the User Experience.pdf (Dropbox folder: Workshops/W1).
     • Have a look at Chapter 4, pages 63-92.
