This paper discusses challenges in contextual task analysis and the need for tools that support analysts in collecting such information in context. Specifically, we argue that the analysis of collaborative and distributed tasks can be supported by ambulatory assessment tools. We illustrate how contextual task analysis can be supported by TEMPEST, a platform originally created for experience sampling and, more generally, longitudinal ambulatory assessment studies. We present a case study that illustrates the extent to which this tool meets the needs of real-world task analysis, describing the gains in efficiency it can provide as well as directions for the development of tool support for task analysis.
Towards Task Analysis Tool Support
2. Towards Task Analysis Tool Support
Suzanne Kieffer1
Nikolaos Batalas2 Panos Markopoulos2
1Université catholique de Louvain
Louvain School of Management
Louvain-la-Neuve, Belgium
2Eindhoven University of Technology
Industrial Design
Eindhoven, The Netherlands
3. Task Analysis
User goals, tasks and work environment
User errors, breakdowns in the task, and workarounds
4. Task Analysis
Usability Goal Setting
Work Reengineering
User Interface Design
Other Usability Engineering Tasks
5. Data collection
Face-to-face interaction
User observation
Note taking
Audio/video recording and transcribing
Task Analysis remains resource-intensive
7. Room for improvement
Analyst efficiency
Analyst workload
User time and effort
In situ data collection
Ambulatory Assessment methods
8. Ambulatory Assessment (AmA)
Purpose: to assess the ongoing behaviour, knowledge and experience of people during task execution in their natural setting
Examples: experience sampling, repeated-entry diaries, ecological momentary assessment, acquisition of ambient signals
9. To what extent can AmA methods support in situ data collection during task analysis procedures?
10. Method
1. Task model hypothesis
Analysis of procedures and artefacts
Setting of questions and experimental design
2. Tool-supported in situ data collection
Users: expertise and responsibility
Tasks: frequency, criticality and complexity
Problems and errors
3. Contextual observations/interviews
17. Setting of questions
Q1. Please indicate your degree of familiarity with this task
Q2. How frequently is this task executed?
Q3. Please indicate when it was executed for the last time
Q4. Please indicate when it will be executed next time
Q5. Please select all the possible contexts where it takes place
Q6. Why does it have to be executed?
Q7. Please indicate a means to facilitate or improve this task
Q8. Please give an example of a possible problem during its execution
Q9. Please give an example of an error committed during its execution
Q10. Please select from the list all the participants in this task
Q11. Please indicate who asks for its execution
Q12. Please indicate to whom the related result is communicated
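As an illustration, the twelve questions above could be encoded as structured data for delivery by a sampling tool. The representation below is a sketch: the field names are hypothetical and do not reflect TEMPEST's actual configuration format.

```python
# Hypothetical encoding of the Q1-Q12 task-analysis questions.
# Field names are illustrative only; TEMPEST's real configuration
# format is not shown in the slides.
QUESTIONS = [
    {"id": "Q1",  "text": "Please indicate your degree of familiarity with this task"},
    {"id": "Q2",  "text": "How frequently is this task executed?"},
    {"id": "Q3",  "text": "Please indicate when it was executed for the last time"},
    {"id": "Q4",  "text": "Please indicate when it will be executed next time"},
    {"id": "Q5",  "text": "Please select all the possible contexts where it takes place"},
    {"id": "Q6",  "text": "Why does it have to be executed?"},
    {"id": "Q7",  "text": "Please indicate a means to facilitate or improve this task"},
    {"id": "Q8",  "text": "Please give an example of a possible problem during its execution"},
    {"id": "Q9",  "text": "Please give an example of an error committed during its execution"},
    {"id": "Q10", "text": "Please select from the list all the participants in this task"},
    {"id": "Q11", "text": "Please indicate who asks for its execution"},
    {"id": "Q12", "text": "Please indicate to whom the related result is communicated"},
]
```

A structured question set like this is what makes the later steps (programming sequences and firing questions) automatable at all.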
18. Experimental setup
30 items, 4 key users, 3 shifts, 12 questions
12 participants
29 items x 12 questions + 1 question
4,200 questions in total, i.e. 350 questions per participant
9 days, i.e. about 40 questions a day per participant
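The workload figures above can be sanity-checked from the totals reported on the slide:

```python
# Sanity-check the reported workload figures (numbers from the slide).
participants = 12
total_questions = 4200
days = 9

per_participant = total_questions // participants  # 4200 / 12 = 350
per_day = per_participant / days                   # 350 / 9 ~ 39, i.e. about 40 a day

print(per_participant, round(per_day))  # -> 350 39
```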
21. TEMPEST
1. Prepare your material (questions and protocol)
2. Program sequences of questions
3. Create participants
4. Fire questions
5. Analyze answers
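The "fire questions" step can be thought of as spreading each participant's programmed question sequence over the study days. The sketch below illustrates that scheduling idea only; the function is a hypothetical helper, not TEMPEST's actual implementation (TEMPEST is configured through its own interface).

```python
from collections import defaultdict

def fire_questions(sequence, participants, days):
    """Spread a programmed question sequence over the study days (step 4).
    Purely illustrative scheduling logic, not TEMPEST's implementation."""
    per_day = -(-len(sequence) // days)  # ceiling division
    schedule = defaultdict(list)
    for participant in participants:
        for day in range(days):
            schedule[(participant, day)] = sequence[day * per_day:(day + 1) * per_day]
    return schedule

# With 350 questions over 9 days, each participant receives roughly 39-40
# questions a day, matching the workload on the experimental-setup slide.
schedule = fire_questions(list(range(350)), ["user1"], days=9)
```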
25. Challenge
Unfriendly work environment
Complex work organization
Collaborative
Distributed in space and time
Rotating shifts
26. With vs. without tool support
Analyst's efficiency
- With TEMPEST: increased productivity, increased accuracy
- Without TEMPEST: limited productivity, risk of mistakes
Analyst's workload
- With TEMPEST: automated and remote; safe and comfortable; structured process
- Without TEMPEST: manual and face-to-face; difficult and tedious; unstructured process
User's time and effort
- With TEMPEST: 38 hours overall in 9 days; 20 minutes a day per user
- Without TEMPEST: 36 hours overall (estimated); 3 hours per user (estimated)
Questions
- With TEMPEST: timely, with snooze option; rather unintrusive
- Without TEMPEST: disruptive; intrusive
Answers
- With TEMPEST: complete results
- Without TEMPEST: fragmented results
27. Requirements
Supporting tools
Analyst configurability
Real-time monitoring and traceability of responses
On-the-fly adaptation of the sampling protocol
Data collection across platforms (responsiveness)
Task model hypothesis
Guidelines for analysts
Mapping with the sampling protocol
Mapping with the responses
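One way to read the "analyst configurability" and "on-the-fly adaptation" requirements is that prompts should remain editable and reschedulable while the study runs, together with the snooze option noted on the comparison slide. The minimal sketch below illustrates this under stated assumptions: the class and attribute names are hypothetical, not TEMPEST's API.

```python
from datetime import datetime, timedelta

class Prompt:
    """Illustrative in-situ prompt supporting snooze and mid-study edits.
    Class and attribute names are hypothetical, not TEMPEST's API."""

    def __init__(self, question, due, snooze_minutes=15):
        self.question = question
        self.due = due
        self.snooze_minutes = snooze_minutes

    def snooze(self):
        # Postpone delivery so the prompt stays timely rather than intrusive.
        self.due += timedelta(minutes=self.snooze_minutes)

    def reword(self, new_text):
        # On-the-fly adaptation: the analyst can revise a question mid-study.
        self.question = new_text

p = Prompt("Q1: familiarity with this task", datetime(2024, 5, 6, 9, 0))
p.snooze()  # due moves from 09:00 to 09:15
```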
28. Take away
Task Analysis Tool Support (TATS)
Method and TEMPEST
Feasibility and cost-efficiency of TATS
Requirements for conducting TATS
32. Convergences
Reasons to execute a task (Q6): instructions, cleanliness and quality
Means to improve the tasks (Q7): automation, better care of the zinc bath and new equipment
Problems (Q8): technical problems and accidents
Errors (Q9): related to manipulation of the zinc bath and lack of time
35. “The questions interfered with my schedule”
Satisfaction questionnaire, 5-point Likert scale
Shift A=3.50, equally distributed between “neutral” and “agree”
Shift B=2.67
Shift C=2.25
Most of the participants (10/12) thought they answered between 15 and 30 questions a day, while they actually answered about 40