The document describes a platform called RemoTest that allows researchers to conduct remote user studies on the web. It allows researchers to define experiments, tasks and data collection. A case study analysed interaction data from 16 participants conducting navigation tasks on a website. Analysis of cursor-movement trajectories, speed and accuracy found differences between input methods and abilities. The platform and analysis methods support remote accessibility research by characterizing user interactions.
4. Introduction
User behaviour when interacting with the Web has been extensively studied
Researchers need to define
Objectives
Stimuli
Tasks
Procedure
Interaction data analysis is a tedious task
Interact 2015 - Bamberg: Accessibility (2015/09/17)
5. Introduction
Involving an appropriate number of participants is a challenge
Location
Timing of sessions
Remote user testing facilitates conducting experiments
It facilitates involving a larger number of participants
Allows more naturalistic observations
The computer is already adapted
Assistive technologies
Special configurations
7. RemoTest Platform I
Some requirements for inclusive remote user testing tools (Power et al. 2009)
Participant
Record demographic data
Specify the technology used
Select the trials
Researcher
Provide features to test customized and “real” websites
Define tasks for a set of users
Specify a set of questions to the user, before and/or after the task has been completed
Provide instructions and training documents for each trial
8. RemoTest Platform II
Allows defining accessible web experiments, managing remote/in-situ experimental sessions and analysing the interaction data
Hybrid architecture:
Server
Experimenter Module (EXm)
Coordinator Module (COm)
Results Viewer Module (RVm)
Client
Participant Module (PAm)
9. RemoTest: Experimenter Module (EXm)
Define an accessible web experiment in 5 easy steps
Step 1: Specify the experiment type
Step 2: Define each task
Step 3: Define the procedure
Step 4: Specify data to be gathered
Step 5: Select the sample
Experiment Specification Language (based on XML)
10. RemoTest: Coordinator Module (COm)
Manages the experiments defined by experimenters using the EXm
Automatically creates:
Questionnaires
Task descriptions
Alerts
Task sequences
Stores gathered events in a NoSQL database
Experimental Session Controller Language (based on XML)
11. RemoTest: Participant Module (PAm)
Implemented as an add-on for the Firefox browser
Based mostly on JavaScript, XML and HTML
Easy migration to other platforms
Interprets the Experimental Session Controller Language states
Presents the stimuli
Presents Alerts
Detects when the task ends
Manually
Automatically
Sends the data gathered to the COm
12. RemoTest: Results Viewer Module (RVm)
Presents the interaction data gathered
Automatically calculated measures:
Rapidity measures: time on page, cursor average speed and cursor acceleration
Accuracy measures: trajectory distance, curvature index (CI), ratio between start-to-end-position amplitude and start-to-target-centre amplitude
14. RemoTest: RVm – Pointing trajectories
Starts when the user decides to move the cursor to reach a target
Trajectories are composed of sub-movements separated by short pauses
Controlled laboratory experiments can specify restricted interactions, making the beginning of cursor movement explicit
Naturalistic settings with untagged web interactions do not record any explicit trace of the cognitive process behind the user's intention
Heuristics are needed to estimate the beginning of the aimed movements
15. RemoTest: RVm – Pointing trajectories II
Delimiting the beginning:
First cursor-move event on the page -> candidate
Page scroll occurs -> candidate is rejected
Pause or stop?
Valid pauses during aimed movements vary among users
Stop threshold calculated for each user:
Durations of the intervals where speed = 0
Median value and quartile deviation of all interval durations
Interval duration < (median + 2 x quartile deviation) -> pause, else stop
Stop -> candidate rejected
End of movement: click event + page load event
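The per-user threshold above can be sketched in a few lines (an illustrative reconstruction of the rule described on this slide, not the platform's actual code; `stationary_intervals` is assumed to hold the durations, in ms, of the intervals in which the cursor speed was 0):

```python
import statistics

def stop_threshold(stationary_intervals):
    """Per-user threshold separating pauses from stops.

    An interval shorter than median + 2 * quartile deviation is
    treated as a pause inside an aimed movement; a longer one is
    treated as a stop (which rejects the current candidate).
    """
    q1, median, q3 = statistics.quantiles(stationary_intervals, n=4)
    quartile_deviation = (q3 - q1) / 2
    return median + 2 * quartile_deviation

def is_pause(interval_ms, threshold):
    return interval_ms < threshold
```

Because the threshold is derived from each user's own motionless intervals, a keyboard-only user with long direction-change pauses gets a larger threshold than a mouse user.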
16. RemoTest: RVm – Pointing trajectories III
[Figure: example pointing trajectories for (a) a joystick and (b) keyboard only with a head pointer]
18. Case Study
Objective:
Analyse different navigation strategies
Participants:
16 Participants
11 People with upper-body physical impairments (U01-U11)
5 Able-bodied people (U12-U16)
Location:
7 experimental sessions were carried out at Elkartu
6 participants carried out the experimental session in a laboratory of the Computer Science School at the University of the Basque Country
3 at their homes
19. Case Study: Tasks
The experiment consisted of three tasks:
Filling in a questionnaire about demographic data (Task 1)
Free navigation task with a 5-minute duration (Task 2)
Searching for a target with a maximum duration of 10 minutes (Task 3)
20. Case Study: Defining The Experiment
Step 1: Specify the type of experiment: web navigation
Step 2: Determine the tasks and stimuli of the experimental sessions.
Task 1: Guided process for defining the questions and possible responses
Task 2: Task duration, task description text, URL of the website, task completion text
Task 3: Similar to Task 2 + end URL
Step 3: Define the procedure of the experimental sessions
Step 4: Specify interaction data to be gathered
Step 5: Select the sample
21. Case Study: Interaction data analysis
Results:
Cursor movement characterization with the interaction data gathered in Task 3
Pointing trajectories
Speed
Curvature Index
A total of 323 web pages were visited by participants:
133 web pages were selected
23 web pages were excluded
167 web pages were removed
Participants with fewer than 5 analysed pages were excluded: U4, U5, U6, U7, U13
U11 was excluded from the analysis because she decided to leave the experimental session
22. Case Study: Interaction data analysis IV
Median of pauses:
Keyboard-only users needed more time
The head-pointer user needed even more
Mouse users needed less time, but close to joystick or trackball users
[Bar chart: median motionless-interval duration, 0-3500 ms, for participants u01, u02, u03, u08, u09, u10, u12, u14, u15 and u16; legend: keyboard only, able-bodied, mouse, trackball, joystick]
24. Case Study: Interaction data analysis II
Speed:
Median values of cursor speed, automatically calculated
Considerable difference in speed
Able-bodied users: highest; keyboard-only users: lowest
[Bar chart: median cursor speed, 0-250 pixels/ms, for participants u01, u02, u03, u08, u09, u10, u12, u14, u15 and u16; legend: keyboard only, able-bodied, mouse, trackball, joystick]
25. Case Study: Interaction data analysis III
Curvature Index:
U02: worst value
Keyboard-only users: best values
[Bar chart: curvature index, 0-3, for participants u01, u02, u03, u08, u09, u10, u12, u14, u15 and u16; legend: keyboard only, able-bodied, mouse, trackball, joystick]
26. Case Study: Interaction data analysis V
Videos from physically impaired users were analysed
Automatic times for pointing < values obtained from the videos
Kendall's concordance test: 0.73 (p = 0.055)
Some relation exists between the rankings

Automatic vs. video analysis:
        APM (ms)    MPM (ms)
U01     2318.0      6440
U02     2314.6      5080
U03     4289.5      3240
U08     5107.6      7190
U09     5728.7      8570
U10     6349.2      20480
27. Case Study: Discussion
The automatically obtained results are useful for characterizing the cursor movements and detecting profiles
Curvature index, cursor speed and the time to click on a target assist researchers in detecting problems
This holds even if the experiments are carried out in remote settings
The algorithm is useful for ranking purposes
29. Conclusions
The RemoTest platform supports experimenters throughout the entire process
The experiment definition features cover the specification of different kinds of studies:
studies on predetermined web tasks or free navigation tasks
comparative studies on navigational strategies
accessibility-in-use evaluations
web surveys
A straightforward visualization of each participant's interaction data
Heuristic estimations to obtain pointing-trajectory-related measures that enable further understanding
First, I will explain our motivation for creating RemoTest, a tool to perform accessible web user studies remotely or in situ.
After that, I will introduce the system, and we will see how the experiments are defined and analysed through a case study.
Finally, the conclusions will be presented.
User behaviour when interacting with the Web has been extensively studied. Numerous accessibility, usability or related studies can be found in the literature.
Experiments have to be carefully planned in order to obtain meaningful results. A fault in the design could lead to an erroneous interpretation of results.
So researchers have to clearly define the objectives of experiments.
With the objective in mind, experimenters have to define the stimuli and the tasks to be performed on them. Then it has to be decided whether the tasks should be counterbalanced, whether different groups exist, and so on: in other words, the experimental procedure.
Once all the participants have completed the tests, the analysis of all the data gathered is needed to find significant results.
This can be a tedious task, even more so when the analysis is done with video recordings. Researchers are required to watch a video over and over again to find meaningful interaction events or data and annotate them. This can be very time consuming, especially when the sample size or the number of tasks is large.
Involving the appropriate number of participants could be a problem.
Sometimes users need to travel to the laboratory, which is not always near their home.
Or there are not many users with the required characteristics for the study.
The rigorous timing of the sessions is also a problem: it is not always easy to match researchers' and participants' agendas.
These problems are even bigger when people with disabilities are part of the study. For instance, for a blind person it can be very challenging to travel to a place they have never been before.
These are some of the reasons why there is an increasing interest in performing web experiments remotely. It facilitates involving a larger number of participants. Users are not required to travel, and they can carry out the test whenever they like.
Regarding people with disabilities, this kind of tool allows more naturalistic observations. The study can be carried out at their home with their own device or computer.
A clear advantage is that they already have their computer adapted to their needs and preferences. It is not always possible to reproduce users' configurations or assistive technologies in a laboratory.
Power et al. listed a number of requirements that a remote user testing tool should meet to conduct experimental sessions with people with disabilities. This classification has been used to design and implement the RemoTest tool.
There are different requirements regarding the participant and the researcher perspectives.
From the participants' perspective, the tool should provide methods that enable them to record their demographic data and specify the assistive technology used during the testing, and they should also be able to select the trials they want to perform.
From the researcher's perspective, the tool should provide methods to test mock-up websites or real websites. Researchers should be able to define a set of tasks for a set of users, for instance defining which tasks should be performed by each user depending on age or disability.
The tool should provide methods to define pre-task or post-task questionnaires. These are useful to gather user satisfaction or emotions before or after a treatment.
The tool should also be capable of creating accessible task instructions and training tasks. This allows users to familiarize themselves with the type of tasks, making sure that they have understood the aim of the experiment and how to proceed to complete the task.
The RemoTest platform provides methods to define, manage and analyse web experimental sessions, in a laboratory or in remote locations.
RemoTest follows a hybrid architecture approach. The modules related to the experimenter are located on a server: the Experimenter Module, the Coordinator Module and the Results Viewer Module.
In contrast, the Participant Module is located on the user's computer.
Now I am going to introduce each module.
The Experimenter Module is located on the server and has been designed as a web application in order to give researchers access from different devices and locations more efficiently.
With the Experimenter Module the researcher can define an accessible web experiment in 5 easy steps.
The first step is to define the type of the experiment: whether it is a web experiment, a survey or mixed.
Then the desired tasks or questionnaires have to be defined. The system guides the experimenter during the process, asking for the necessary information to create accessible questionnaires and task descriptions.
In the third step the procedure is defined. The experimenter has to decide whether the experiment is a within-subjects or a between-groups experiment.
Depending on the selection made, different counterbalancing methods are provided by the tool, such as random or Latin square.
In the fourth step the data to be gathered is selected. Four options exist: browser events (like opening a tab or using contextual menus), cursor events, keyboard events and HTTP requests.
In the last step all the information related to participants is addressed. New users or characteristics can be added to the database, from which the desired sample is selected.
If different groups exist, the system checks whether they are balanced and provides different methods to assign users from the sample to the groups.
The information gathered in all the steps is converted into an XML-based language called the Experiment Specification Language and sent to the Coordinator Module.
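As a purely hypothetical sketch of what such a specification could look like (the real ESL schema is not shown in the talk, so every element and attribute name below is an assumption; the task kinds and options mirror the steps just described):

```xml
<!-- Hypothetical ESL fragment: element and attribute names are illustrative -->
<experiment type="webNavigation">
  <task id="1" kind="questionnaire"/>
  <task id="2" kind="freeNavigation" duration="PT5M" startUrl="http://example.org/"/>
  <task id="3" kind="search" maxDuration="PT10M" startUrl="http://example.org/"
        endUrl="http://example.org/target"/>
  <procedure design="withinSubjects" counterbalance="manual"/>
  <data browserEvents="true" cursorEvents="true" keyboardEvents="true" httpRequests="true"/>
  <sample><participant id="U01"/></sample>
</experiment>
```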
The Coordinator Module interprets the Experiment Specification Language and creates all the necessary stimuli and questionnaires, always ensuring the accessibility of the created elements.
Besides that, it also creates the personalized experimental sessions for each user.
It counterbalances the stimuli for each user or group based on the options gathered in the Experimenter Module, and then stores all the necessary information in a database.
When a request is received from a Participant Module, it converts the personalized experimental session into another XML-based language called the Experimental Session Controller Language.
In addition, it is also responsible for storing the data gathered from the experimental sessions in a NoSQL database.
The Participant Module has been developed as a Firefox add-on and has to be installed on the user's computer.
Most of the code is based on JavaScript, XML and HTML, so it should be easy to port it to a Chrome add-on or even to a proxy tool.
This module interprets the Experimental Session Controller Language and creates the necessary states to guide the user during the experimental session.
It presents the task instructions, stimuli and alerts, and also detects when a task ends. Three options are currently available to detect the end of a task: reaching a target, clicking on an element, or using a button provided by the tool so the user can explicitly mark the end.
This module is also in charge of gathering the events and sending them to the COm.
All the events are sent using the HTTPS protocol, asynchronously, without interfering with the user's navigation experience.
Finally, the Results Viewer Module deals with the presentation of the gathered data.
It automatically calculates measures like time on page, cursor speed and acceleration, which are catalogued as rapidity measures.
It also calculates some accuracy measures, like the trajectory distance and the curvature index, which compares the optimal path with the path actually followed by the user.
Another accuracy measure is the ratio between the distance from the cursor-movement starting point to the click position and the distance from the starting point to the target centre.
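Both accuracy measures reduce to ratios of distances. Here is a minimal sketch of the curvature index under its common definition (travelled path length divided by the straight-line start-to-end distance); this is an illustration of the technique, not RemoTest's actual implementation:

```python
import math

def curvature_index(points):
    """Ratio of travelled path length to straight-line distance.

    points: list of (x, y) cursor positions in chronological order.
    A value of 1.0 means a perfectly straight movement; larger
    values mean detours relative to the straight path.
    """
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    straight = math.dist(points[0], points[-1])
    return path / straight if straight else float("inf")
```

For example, an L-shaped movement of 3 px up then 4 px right travels 7 px to cover a 5 px straight line, giving a CI of 1.4.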
It is possible to graphically compare the measures on a concrete web page or task with those of other users, and also to compare the performance of one user across different tasks or web pages.
Some measures, like the speed, acceleration, distance to target or distance to target from the intention time, can be interpreted graphically.
In the speed and acceleration charts, a user using the numeric keypad with a head pointer can be seen.
It can be seen how this produces similar speed and acceleration peaks followed by short pauses, which the user needs in order to change the direction of the cursor by pressing another key, or to reduce the speed when arriving at the target.
At the bottom we can see how a user with an oversized trackball has some difficulties once the cursor is in the vicinity of the target. These graphs take into account the events gathered on the whole page.
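The speed and acceleration series behind such charts can be derived from timestamped cursor positions. A minimal sketch follows, assuming events arrive as `(t_ms, x, y)` tuples (that log format is an assumption, not the platform's documented one):

```python
import math

def speed_series(events):
    """Speeds in pixels/ms between consecutive cursor events.

    events: chronological list of (t_ms, x, y) tuples.
    Returns a list of (t_ms, speed) samples.
    """
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(events, events[1:]):
        speeds.append((t1, math.hypot(x1 - x0, y1 - y0) / (t1 - t0)))
    return speeds

def acceleration_series(speeds):
    """Accelerations (pixels/ms^2) between consecutive speed samples."""
    return [(t1, (v1 - v0) / (t1 - t0))
            for (t0, v0), (t1, v1) in zip(speeds, speeds[1:])]
```

Peaks in the speed series followed by near-zero samples are exactly the peak-and-pause pattern described for the keyboard-only user.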
In the pointing trajectory graph the same can be seen, but starting from the user's intention.
When calculating cursor-movement measures, it is important to only take into account the moment when the user has decided to click on a target. The user could be moving the cursor while reading, or could have a phone call…
How the pointing trajectories are calculated is explained in the following slides.
A pointing trajectory or aimed movement starts when the user decides to move the cursor to a target.
Aimed movements consist of several sub-movements separated by pauses
A pause may correspond to a sub-movement transition or, if it is long enough, to a trajectory segmentation.
In laboratory experiments, restricted interactions can be created, like in typical Fitts's law tests.
In this kind of test it is known that the cursor movements are aimed movements, but in more naturalistic settings, like navigating the web, it is not so clear. So heuristics are needed to estimate the beginning of such movements.
Unlike other studies, where a unique pause duration was set for all users, we decided to calculate a threshold value for each user. This value varies among users, since it changes depending on the pointing device used; it is not the same to use a joystick, a mouse or MouseKeys.
And even if the same input device is used, each user's ability to control the device varies.
So we took all the interval durations during which the cursor was stopped and calculated their median value and quartile deviation.
When the cursor speed was 0 for a duration two or more quartile deviations above the median, the candidate was rejected as a starting point.
The process ends when a click followed by a page load is found, and the current candidate is saved as the starting point of the user's intention.
We considered the first cursor-move event recorded as the candidate beginning of the aimed movement.
Each time a page scroll occurred, the candidate (if one existed) was rejected as an intention starting point. A new candidate is set when a new cursor-movement event is triggered.
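Putting these rules together, the start-detection heuristic can be sketched as a single pass over the page's event stream (an illustrative reconstruction of the description above, not the platform's code; the `(kind, t_ms)` event encoding and the treatment of stops as long gaps between move events are assumptions):

```python
def find_intention_start(events, threshold):
    """Estimate the start time of the aimed movement on a page.

    events: chronological (kind, t_ms) pairs, kind being one of
    'move', 'scroll', 'click', 'load'. threshold: per-user pause
    limit in ms. Returns the estimated start time, or None.
    """
    candidate = None
    last_move = None
    clicked = False
    for kind, t in events:
        if kind == 'move':
            # a motionless gap at or above the threshold is a stop:
            # the previous candidate is rejected
            if last_move is not None and t - last_move >= threshold:
                candidate = None
            if candidate is None:
                candidate = t            # new candidate start
            last_move = t
        elif kind == 'scroll':
            candidate = None             # scrolling rejects the candidate
        elif kind == 'click':
            clicked = True
        elif kind == 'load' and clicked:
            return candidate             # click + page load ends the movement
    return None
```

With a 600 ms threshold, a long stop at the start of a page discards the first candidate and the movement is considered to begin at the next cursor-move event.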
These graphs represent the distance from the cursor to the target for an entire page.
Here it can be appreciated how different pause thresholds were used for each user to determine the pointing trajectory.
For instance, the upper graph corresponds to a person who uses a joystick as a pointing device.
This user does not need to make long pauses to change or correct the path of the cursor to reach the target. So the threshold calculated for him is not large, and the first stop discarded the first candidate.
In the lower graph it can be appreciated that the pauses made by the user are much longer than the previous user's.
This user was using the numeric keypad as a pointing device, so she needed more time to modify the path of the cursor.
So the threshold calculated for her was larger than for the other user.
If the threshold value were fixed for all users, the pauses made by this user would cut the intention time, considering only the last part as the aimed movement, which would be inadequate.
Or, on the contrary, with a higher value it would take the entire page as the aimed movement for the joystick user.
A case study was performed with 16 participants: 5 of them were able-bodied people and 11 were people with upper-body physical impairments.
The objective was to analyse the different navigation patterns assisted by the RemoTest platform.
The tests were carried out in different locations: 7 at Elkartu, an association of people with physical impairments; 6 in our laboratory; and the other 3 at the participants' own homes.
The experiment consisted of three tasks.
First, participants had to answer a questionnaire about their demographic data, like age and experience with the web and computers.
Then they had a 5-minute free navigation task devoted to letting the users get to know the website a little.
After they were familiarized with the website, they had to perform a search with a 10-minute time limit.
Now I will explain the process of defining the experiment.
First, web navigation was selected as the experiment type.
Then the tasks were created.
First, the questionnaire was created with the help of the EXm. RemoTest asked for the necessary attributes to create an accessible questionnaire.
Then a web task was created by entering the title, description, task duration and starting URL.
The next task was created similarly, the only differences being the task duration and the target URL where the objective was located.
Then the procedure of the experiment was defined. In this case all participants had the same sequence, so the task sequence was set manually.
After that, the data to be gathered was selected.
Finally the participants were selected.
After that, the COm created the questionnaire and the task-description web pages.
Then the PAm was installed on the users' computers. Users entered a given user name, and the Participant Module presented the stimuli according to the XML created by the COm for that user.
When all the data was collected, the results were analysed with the Results Viewer Module.
In this case, the cursor speed, CI and time for pointing were analysed from the interaction data gathered in Task 3.
From the total of 323 visited web pages, only 133 were selected.
23 pages were excluded since they did not have any cursor movement on them; they were the result of pressing the back button repeatedly.
The other 167 pages were removed because the mouse was outside the web browser, a new page was loaded, or because of an already-corrected problem with iframes.
Participants with fewer than 5 pages were excluded, since the data gathered might not be accurate enough.
User 11 was also excluded, since she decided to leave the experiment before finishing the task. She was using a head-pointer device and was too tired to continue.
Here the median of the motionless intervals can be seen.
This value plus two quartile deviations was used as the threshold to separate the pauses from the cursor stops.
As can be appreciated, the keyboard-only users needed notably more time than the others, as they need some time to press the correct key to change the cursor direction.
U10, who was using a head-pointer device, needed even more time than the other two.
Mouse users obtained the smallest values, but they were quite close to the joystick and trackball users.
In this slide the time needed to point at a target and click it can be seen.
It can be appreciated that users using the numeric keypad needed more time for pointing.
Once again, the user using the head wand needed more time than the other two numeric keypad users.
The user who needed the most time to click was U02, whereas U14 had the lowest value.
U02 was using an oversized trackball, and he needed more time to click because of the position of the buttons: he had to separate his hand from the ball and move it to press the button.
Able-bodied participants (U12, U14, U15, U16) obtained better values for both measures (mean value for clicking 855.32 ms, mean value for pointing 1931.41 ms) than participants with physical impairments (mean value for clicking 1872.53 ms, mean value for pointing 4351.27 ms).
Regarding the speed, it can be appreciated that the able-bodied participants using the mouse as a pointing device were clearly faster than the other users.
The slowest users were the numeric keypad users.
This can be due to the discrete movements of the device used, but also to the pauses needed to change the pressed key in order to redirect the cursor to the desired target.
As commented previously, the CI measures the relation between the optimal path and the path followed by the user.
A high value means that there is a lack of precision in the cursor movements. The worst value was achieved by U02, a person with cerebral palsy using an oversized trackball.
Users using the numeric keypad had the best values: the numeric keypad produces linear and more precise movements.
U01, who had the slowest speed among those using the mouse as a pointing device, had good results in the CI.
He moved the mouse slowly but accurately.
In order to validate the pointing trajectories automatically calculated by the tool, videos from the physically impaired users were analysed to calculate the time they needed to point at a target.
As can be appreciated, the values obtained by the tool are lower than the results obtained manually.
One possible explanation could be that more pages were used to calculate the measures in the video analysis.
Another explanation could be that, even when watching the video, it is not easy to decide when the user started moving the cursor towards the desired target; even two people could have different opinions.
So, to analyse the agreement between the rankings, we performed a Kendall's concordance test.
The Kendall coefficient was 0.73 with a p-value of 0.055, meaning that some kind of relation exists between the rankings.
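The reported coefficient can be reproduced from the APM/MPM table with a plain Kendall rank-correlation computation (a from-scratch sketch based on pairwise concordance counting; the exact test variant used in the study is not detailed in the talk):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation for two paired samples without ties."""
    pairs = list(combinations(range(len(xs)), 2))
    # a pair is concordant when both samples order it the same way
    concordant = sum(1 for i, j in pairs
                     if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0)
    return (2 * concordant - len(pairs)) / len(pairs)

# APM/MPM values for U01-U10 from the table above
apm = [2318, 2314.6, 4289.5, 5107.6, 5728.7, 6349.2]
mpm = [6440, 5080, 3240, 7190, 8570, 20480]
tau = kendall_tau(apm, mpm)  # ~0.73, matching the reported coefficient
```

Of the 15 participant pairs, only the orderings involving U03 disagree between the two measurements, which is what keeps the coefficient high.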
As we have seen, the automatically obtained results are useful to characterize the cursor movements and to detect different profiles (keyboard-only users, for instance).
CI, speed and the time needed to click could assist researchers in detecting problematic situations.
Thanks to the algorithm that tries to detect the aimed movements, it is possible to detect problems even if the experiments are performed in remote settings.
The algorithm has proven useful for ranking purposes. The results are not as accurate as those obtained manually but, as said before, this can be due to the lack of data or to the difficulty of assigning the start of the aimed movements, even when watching the videos.
As a conclusion, it should be highlighted that the RemoTest platform supports the experimenter through the whole accessible web experiment process: designing the experiment, running it and analysing the gathered data.
The use of heuristics enables discarding data that could confuse the experimenter or alter the results.
With the tool, different studies can be carried out, like studying user behaviour in specific web tasks or comparative studies on navigational strategies, and it can also be used to detect accessibility problems on a webpage.
The tool provides a visualization of the gathered data in an understandable way that can be used to detect problems at a glance, reducing the time needed to analyse the videos when they are available.
The use of heuristic estimations to obtain pointing-trajectory-related measures provides further understanding of participants' behaviours; in-the-wild data can be of poor quality, can be misinterpreted, or can yield nothing at all.
Future work will focus on adding new performance measures to the tool, like the number of times the path crosses the target's x and y axes.
We also want to improve the algorithm by finding new parameters to better estimate the aimed movements, which would improve the results given by the tool.
Finally, we would like to perform other user studies with other groups of impaired users, which would help us better understand the navigation strategies used and the problems they face while navigating the web.
Before going into details, I will briefly explain the different types of architectures available for remote user testing tools.
There are 3 options regarding the place where the tool is installed: server-side, proxy-side and client-side tools.
Server-side tools are the least intrusive, since the user does not need to install anything; the test could be performed even without the user's knowledge, as in automatic A/B testing.
Their main drawback is that we can only study the websites we own or those whose server we have access to.
On the contrary, proxy-side tools, which are located between the user and the web server, can be used with any website. These tools inject the necessary code into the visited pages to track the user while navigating.
Usually this kind of tool requires the user to configure their browser so that the proxy can track the visited web pages.
Finally, client-side tools can be used with any website. These tools are able to gather all the events generated by the user and the browser, like the usage of the back button, the right-click context menu, bookmarking a webpage, printing…
That is why we think that client-side tools are the best option for researchers. But they also have some problems; for instance, users are required to install software on their computers. This can be overcome by providing clear instructions about the installation process. For instance, we asked users to install the tool before the test, and all of them could install it regardless of their previous experience with computers or their disability.
So, as can be seen, only one tool, Uzilla, meets all the requirements. However, we have not found any information about its accessibility, neither for the questionnaires and instructions nor for the tool itself.
Most of the server-side tools are only valid for carrying out unmoderated user tests and do not provide methods to select the tasks. And, as said before, only owned websites can be tested.
The proxy tools available in the literature were more devoted to free-navigation-like tasks, for example letting the user navigate for a week. But these tools are not valid when a concrete task is to be performed: there is no instruction page where the user is told what to do.
On the other hand, the client-side tools created by Gajos and Hurst were more devoted to identifying aimed movements in the wild or to machine-learning studies that detect or identify user characteristics from cursor movements.
Our tool, instead, meets all these requirements. And besides free-navigation-like tasks, more guided user-test tasks can also be performed.
In the second step the experimenter has to define the tasks or the questionnaires.
Currently, 2 types of web tasks can be defined: free navigation tasks and search tasks. Free navigation tasks can be time limited and can have a starting URL. Search tasks have the same attributes, with the difference that they also have attributes to determine the end of the task.
A description of the task is also needed for both types of task.
Regarding the questionnaires, most question types are covered by RemoTest: open questions, closed questions, Likert scales and ranges.
The system asks the researcher for the necessary attributes to enable the creation of accessible questionnaires and information pages, like, for instance, helping tips.
In this step the experimenter also has to define the dependencies between the different tasks, if any exist.
For example, the researcher could define a questionnaire before and after a task to measure, for instance, user satisfaction.
In the third step the procedure is defined.
First, it has to be decided whether the experiment is a within-subjects or a between-subjects study.
For within-subjects studies, different counterbalancing options exist: manual, random, Latin square or rotation.
For between-groups studies it is necessary to specify how many groups exist.
Currently, only two counterbalancing options are available for between-subjects experiments: random and manual.
If the experimenter selects the manual option, they have to decide the task sequence for each user group; otherwise it is randomly assigned by the tool.
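For illustration, one standard Latin-square construction rotates the task order for each row (a sketch of the general technique; whether RemoTest uses exactly this construction is not stated in the talk):

```python
def latin_square_orders(tasks):
    """Cyclic Latin square: row i presents the tasks rotated by i.

    Each task appears exactly once in every row and every column,
    so each task occupies each serial position equally often.
    """
    n = len(tasks)
    return [[tasks[(i + j) % n] for j in range(n)] for i in range(n)]
```

For three tasks A, B, C this yields the orders ABC, BCA and CAB, which would be assigned to successive participants.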
In the fourth step the data to be gathered is selected.
It has been divided into four groups:
browser events, like opening a tab, using the back or forward button, contextual menus, scrolling…
cursor-related events, like movements, selections, clicks, wheel usage and hovers; in addition, the size and position of the hovered or clicked element are also gathered
keyboard-related events, like key down, key up and key pressed
server-side logs, like HTTP requests
In the last step all the information related to participants is addressed.
New users or characteristics can be added to the database, from which the desired sample is selected.
If the experiment is a between-subjects study, different methods exist to assign users from the sample to a group:
manually, randomly, or by establishing criteria like gender, age or the assistive technology used.
The system checks whether the groups are balanced. If the groups are not equal, RemoTest asks the researcher to fix it; the experimenter can then add new participants to the sample or remove users from groups.
The information gathered in all the steps is converted into an XML-based language called the Experiment Specification Language and sent to the Coordinator Module.