Interactively Linked Eye Tracking Visualizations
Wietske de Bondt, Bartjan Henkemans, Jeroen Lamme, Gijs Pennings, Lászlo Roovers, and Michael Burch
Fig. 1. Eye clouds visualization of Bordeaux (left) and heatmap visualization of Warschau (right) of the metro maps dataset [23].
Abstract—In this report, Argus, a tool for generating visualizations of eye tracking data, is presented. There are numerous ways to
visually present eye tracking data: heatmaps, scanpaths, gaze stripes, eye clouds, and AOI transition diagrams, to name a few. On top
of that, there are multiple ways to interact with these visualizations, such as selecting users, stimuli, and fixation points to compare these
features between the different visualizations. All of the aforementioned visualizations and interaction techniques are implemented in
this tool. This report describes these visualizations and interactions, including their advantages and disadvantages and how they are
used in understanding eye tracking data. Furthermore, the report looks at the structure of the dataset, how the tool runs on a
server, how data is stored, and the design philosophy of the website. Finally, the tool is previewed by means of an application example,
and the performance and limitations are discussed.
Index Terms—Eye Tracking, Information visualization, User Interaction
1 INTRODUCTION
Eye tracking is getting more and more attention, and not without reason.
Insights that follow from eye tracking can be applied in several research
fields, ranging from psychology and education to sports analysis and
electrical engineering [15,16,18]. Advanced analytics are required for
researchers to make sense of such large amounts of data. In the broad field
of eye tracking visualizations, it is easy to lose the overview. Blascheck et
al. [3] have created a taxonomy of eye tracking visualizations, which
helps when deciding which visualizations to make.
Although many eye tracking visualization tools exist, little attention
has been given to linking them. This is especially important for data
analysts, since interaction techniques provide a way to adjust the
visualizations until new insights are found. Moreover, providing these
visualizations in an online environment allows researchers to easily share
their visualizations without installing proprietary software. Yi et al. [28]
have identified seven different interaction techniques; our research builds
on this by implementing all of these interaction techniques in a web-based tool.
• Wietske de Bondt (1442880), e-mail: w.p.d.bondt@student.tue.nl.
• Bartjan Henkemans (1414976), e-mail: b.henkemans@student.tue.nl.
• Jeroen Lamme (1443062), e-mail: j.s.k.lamme@student.tue.nl.
• Gijs Pennings (1441388), e-mail: g.p.s.pennings@student.tue.nl.
• Lászlo Roovers (1439251), e-mail: l.roovers@student.tue.nl.

The primary aim of this research project was to gain useful insights into eye movement data by providing multiple web-based eye
movement visualizations, which are interactively linked. This paper
describes the implementation of a heatmap, a scanpath visualization,
a transition diagram, eye clouds, and a gaze stripes visualization in
a multiple coordinated view [24]. The latter implies that multiple
visualizations can be seen in one overview.
We implemented our visualization tool using JavaScript as a pro-
gramming language with Node.js as a runtime environment for the
back-end, and D3.js to implement our visualizations in the front-end.
Furthermore, we used a MySQL database to store and retrieve data. In
this way, the interactive graphics can be viewed in a web browser. This
also allows other data analysts to upload their data and gain insights
by viewing the visualizations from several perspectives. The usefulness of
our web-based visualization tool is illustrated by applying it to eye
movement data of public transport maps, used in another eye tracking
study [23].
2 RELATED WORK
Eye tracking visualizations consist of both a representation component
and an interaction component. Concerning the representation
part, a distinction can be made between point-based and AOI-based
(short for ‘area of interest’) visualizations. On the one hand,
point-based visualizations use the x- and y-coordinates of fixations,
optionally together with time-related information. On the other hand,
AOI-based visualizations use extra information about the data: they
define AOIs, which are areas or objects of interest on a stimulus.
AOIs need to be defined either by researchers themselves or with the
help of clustering algorithms. However, there is no single clear guideline
for choosing AOIs, which makes them less objective. Therefore,
AOI-based visualizations are more difficult to realize, as has been
addressed by Hessels et al. [17].
Several well-known point-based visualization techniques will now
be described. Bojko [4] describes the usage of heatmaps and how they
should be handled with care. The main advantages of heatmaps are
that the color representation is very intuitive and that the heatmap
is shown on top of the stimulus itself, such that little mental effort
is needed to interpret it. A disadvantage of a heatmap is that it can
hide details of the stimulus [26]. In addition, a heatmap only shows
visual attention aggregated over time and neglects the time axis in its
visualization. A time-preserving visual attention map [6] would have
been a different approach that does preserve the time aspect.
Another visualization which takes
the time aspect into consideration is the scanpath visualization [25].
In this visualization, scanpaths of all users are shown over the actual
stimulus. Hence, again, little mental effort is required to see relevant
connections. However, a downside to this visualization is that visual
clutter is inevitable with an increasing number of users, fixations and
saccades.
Gaze stripes and their use are described by Kurzhals et al. [21]. A
gaze stripes visualization shows a timeline with cropped images from
the used stimulus. The first advantage of this visualization is that it
becomes easy to recognize common patterns in scanpaths because of
the ordered timelines. Secondly, gaze stripes have a time component
included which gives the visualization a very clear temporal overview.
Unfortunately, with larger sample sizes it will become difficult to see
all the data on one screen.
Burch et al. [12] describe the usage of an eye cloud visualization.
The visualization is based on thumbnails which grow in size the longer
a fixation lasts. The main two advantages of an eye cloud are that, first
of all, the areas that are focused on for the longest continuous period of
time are easily noticed, and secondly, that the most commonly fixated
upon areas are easily distinguished. One disadvantage of an eye cloud
is that it has no temporal overview. Another is that, when studying a
dataset with a lot of fixations, visual clutter is inevitable.
An example of what can be done with AOIs is looking at transitions
between AOIs. Transitions between AOIs are defined as a saccadic
movement between two AOIs [3]. Kurzhals and Weiskopf [22] have
looked into AOI transition trees. This visualization shows objects of
interest and identifies patterns in the transitions between objects. Moreover,
Burch and Timmermans [11] describe the Sankey technique, which is
another approach for visualizing AOI transitions. The main advantage
of AOI transition visualizations is that they give a clear overview of how
the AOIs have been viewed, as can be seen in a study of how
newspapers are read [19]. However, disadvantages of AOI transition
visualizations are that AOIs need to be defined correctly and that there
is no temporal aspect.
Concerning the interaction component of visualizations [1,2,7–10,
13,14,27], seven main interaction techniques between the user and a
visualization system are provided in Yi et al. [28]. These categories
are: select, explore, reconfigure, encode, abstract/elaborate, filter, and
connect. An elaboration of this can be found in Sect. 4.4. In order to
ease the creation of visualizations, several programming libraries have
been developed. An example of this is D3.js, as is documented in [5].
3 DATA MODEL AND PROCESSING
In this section, we will cover the data format and how uploaded files
are parsed. First, we will define some important terms. Next, we will
discuss the exact format of the dataset. Finally, we will explain how
the data is parsed.
3.1 Definitions
In eye tracking experiments, participants are shown visual content. In
this paper, we will refer to this content as the stimulus. In experiments,
participants are either wearing eye tracking glasses or an eye tracker
is mounted close to or integrated into, for example, a monitor. In both
cases, they record where participants are looking dozens of times per
second. These so-called gaze points are then aggregated, based on
area and time, into fixations, which are defined by their position and
duration [3].
3.2 Data Model
The data for our visualization tool must be uploaded as a ZIP archive.
The structure and format were inspired by Netzel et al. [23]. All files
inside are searched until we find a folder named ‘stimuli’; the images
inside are stored to use them for the visualizations. The first comma-
separated values file (CSV) we find is parsed as described below.
In the CSV file, fields must be separated by tabs and records by
newlines. There are eight columns in total. Each record represents a
fixation of some user (‘user’) for some stimulus (‘StimuliName’) at
some time (‘Timestamp’). The columns ‘FixationDuration’, ‘Mapped-
FixationPointX’, and ‘MappedFixationPointY’ describe the duration
and position of the fixations. Of the six columns we use, only the stimulus
and user columns contain text; the other four contain numerical data.
The remaining two columns (‘FixationIndex’ and ‘description’) are not
used for visualizations in our tool.
3.3 Data Parsing
Before the data can be used to create visualizations, the CSV file is
processed. A unique 4-character ID is generated which consists of
digits and lowercase letters, which allows the back-end to store and
identify different datasets. Also, researchers can use this ID to request
their previously uploaded data. The first line of the CSV file is assumed
to contain the column names. The ‘FixationIndex’ and ‘description’
columns of the CSV file are discarded since they do not contain informa-
tion we use while generating visualizations. Then, in the ‘StimuliName’
column, encoding errors are fixed (e.g. ‘Ã¼’ is replaced by ‘ü’) and all
diacritics (accents) are replaced by the corresponding ‘bare’ letter (e.g.
‘ü’ is replaced by ‘u’) to prevent any future encoding and display
errors. Finally, using the ‘FixationDuration’ column, the ‘Timestamp’
column is recalculated such that it starts at 0 for every user interact-
ing with some stimulus. Then, the dataset is loaded into the database
together with its newly assigned ID.
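To make this concrete, the sketch below outlines the parsing steps in Node.js. It is illustrative only: the helper names are ours, the header row is used here for readability even though the actual parser skips it and assumes a fixed column order, and the timestamp is normalized by simply subtracting the earliest value per user/stimulus pair rather than via the ‘FixationDuration’ column.

```javascript
// Illustrative sketch of the parsing steps (not the exact production code).
const crypto = require('crypto');

// Generate a 4-character dataset ID consisting of digits and lowercase letters.
// (Checking for collisions with existing IDs is omitted here.)
function generateId() {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let id = '';
  for (let i = 0; i < 4; i++) id += alphabet[crypto.randomInt(alphabet.length)];
  return id;
}

// Parse the tab-separated file: drop unused columns, repair encoding,
// strip diacritics, and normalize timestamps per user/stimulus pair.
function parseCsv(text) {
  const [header, ...lines] = text.trim().split('\n');
  const columns = header.split('\t');  // used for readability; the real parser assumes a fixed order
  const records = lines.map(line => {
    const fields = line.split('\t');
    if (fields.length !== columns.length) throw new Error('Malformed record');
    const row = Object.fromEntries(columns.map((c, i) => [c, fields[i]]));
    delete row.FixationIndex;          // unused columns are discarded
    delete row.description;
    row.StimuliName = row.StimuliName
      .replace(/Ã¼/g, 'ü')             // repair a common encoding error
      .normalize('NFD').replace(/[\u0300-\u036f]/g, ''); // strip diacritics
    return row;
  });
  // Let 'Timestamp' start at 0 for every user interacting with some stimulus.
  const start = {};
  for (const r of records) {
    const key = `${r.user}|${r.StimuliName}`;
    const t = Number(r.Timestamp);
    start[key] = key in start ? Math.min(start[key], t) : t;
  }
  for (const r of records) {
    r.Timestamp = Number(r.Timestamp) - start[`${r.user}|${r.StimuliName}`];
  }
  return records;
}
```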
There are some checks in place to make sure uploaded data is in
the right format. For instance, if a record does not contain the correct
number of fields, parsing is terminated. However, not all aspects are
checked. For example, the first line containing the column names is
skipped completely. This implies the columns must be provided in the
correct order since the parser will not adapt. Other (edge) cases are
touched on in the discussion.
4 VISUALIZATION TOOL: ARGUS
In this section, we will discuss what tools and frameworks we used
for the back-end and the front-end, and why we made these choices.
We will also explain the layout of and design choices for the graphical
user interface. Lastly, we will list the eye tracking visualizations and
interaction techniques we implemented.
4.1 Back-end Architecture
Naturally, to host our web application we required a server. However,
due to the nature of our project, a server that serves static (i.e. simple
unchanging text) web pages would not suffice. We needed a server that
supported file uploads and that could parse and store data in a database,
in addition to serving interactive web pages.
4.1.1 Server
We chose an off-site server from DigitalOcean with 1 virtual CPU,
3 gigabytes of random access memory, and 25 gigabytes of storage
space. This server’s operating system is a 64-bit version of Ubuntu
20.04 (LTS), on which Node.js and a MySQL server run.
As this project will see only sporadic use, and the graphs are rendered in
the front-end (utilizing the processing power of the user rather than
that of the server), a more powerful server is not necessary. Currently, a
dataset of roughly 120,000 entries is uploaded, parsed, stored, and then
served in under three seconds. However, if the project were to see more
use, the server could easily be resized to accommodate the increase in
traffic. We will return to the performance of our tool in the Discussion
and Limitations.

Fig. 2. The visualizations page of Argus. Here we can see the data selection panel (1), the timestamp slider (2), the actual visualizations (3), the
visualization selection menu (4), the local tool panel (5) and global tool panel (6).
4.1.2 Node.js
For our web server, we chose Node.js, a JavaScript runtime, in com-
bination with Express, a web framework. This has the advantage that
we use JavaScript for both the front-end and the back-end, allowing
people to easily contribute to both. Since this is a common combination
with a large community, ample documentation can be found online.
The flexible and minimalist nature of Express results in only a small
amount of overhead and fast development times. In addition, Node.js
has excellent support for asynchronous programming, which is well
suited for our project (e.g. databases, HTTP, parsing files).
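A minimal sketch of such a setup is shown below. The route names, the multer upload middleware, and the helper functions (parseAndStore, queryFixations) are illustrative assumptions rather than the exact routes of Argus.

```javascript
const express = require('express');
const multer = require('multer');                // commonly used upload middleware (assumed here)

const app = express();
const upload = multer({ dest: 'uploads/' });

// Serve the static front-end (HTML, CSS, JavaScript, D3.js code).
app.use(express.static('public'));

// Accept a ZIP archive, parse it asynchronously, and respond with the new dataset ID.
app.post('/upload', upload.single('dataset'), async (req, res) => {
  try {
    const id = await parseAndStore(req.file.path);   // hypothetical helper (cf. Sect. 3.3)
    res.json({ id });
  } catch (err) {
    res.status(400).json({ error: 'Could not parse the uploaded dataset' });
  }
});

// Return the fixations of a previously uploaded dataset by its 4-character ID.
app.get('/data/:id', async (req, res) => {
  res.json(await queryFixations(req.params.id));     // hypothetical database helper
});

app.listen(3000);
```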
4.1.3 MySQL
For this project, storing the datasets uploaded by users was a vital
part of the functionality of the website. Data storage is convenient
because users do not have to re-upload their dataset and stimuli after
closing a viewing session. Storing the datasets in a database is also
convenient for sharing visualizations as it allows users to send a link of
their visualizations to other researchers, instead of sending the dataset
and the stimuli for them to upload to the website.
When choosing how to store data, there were two major options:
an SQL server or a NoSQL server. For this project, it was elected to
use MySQL to store the imported datasets, as the information that will
be processed has a precise and rigid structure. A NoSQL database
would not enforce this structure. Furthermore, there was a need to store
extra information about our datasets like a title, a description and a
timestamp of upload. Through the relational database we can link this
extra information to the datasets and the fixation points.
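A hypothetical sketch of such a schema is given below; the table and column names mirror Sect. 3.2 and the metadata mentioned above, but are not necessarily identical to the production database.

```javascript
const mysql = require('mysql2/promise');   // any MySQL client would do; this one is assumed for the sketch

async function createTables(connection) {
  // Metadata about each uploaded dataset, keyed by its 4-character ID.
  await connection.query(`
    CREATE TABLE IF NOT EXISTS datasets (
      id          CHAR(4) PRIMARY KEY,
      title       VARCHAR(255) NOT NULL,
      description TEXT,
      uploaded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )`);

  // One row per fixation, linked to its dataset through a foreign key.
  await connection.query(`
    CREATE TABLE IF NOT EXISTS fixations (
      dataset_id  CHAR(4) NOT NULL,
      stimulus    VARCHAR(255) NOT NULL,
      participant VARCHAR(64) NOT NULL,    -- the 'user' column of the CSV file
      timestamp   INT NOT NULL,
      duration    INT NOT NULL,
      x           INT NOT NULL,
      y           INT NOT NULL,
      FOREIGN KEY (dataset_id) REFERENCES datasets(id),
      INDEX (dataset_id, stimulus)
    )`);
}
```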
However, the disadvantage of using MySQL is slower read times
in comparison to a NoSQL database. This is because the server does
relatively complex work to enforce the structure of its tables and to
enforce relations. The upside of enforcing structure and maintaining
relations outweighed the downside of having slightly slower read times.
Therefore, MySQL was chosen.
The MySQL server is hosted on the same machine as the Node.js
server. For security reasons, external connections to the database server are
disallowed; only local connections, such as those from the Node.js web server, are accepted.
4.2 Front-end Architecture
The front-end of our website is what enables the interaction between
the user and the tool. It has been constructed with the aim of providing
a pleasant experience for researchers. How this was done will be
explained in the upcoming section.
4.2.1 General Front-end Structure
The front-end of our project is built using HTML, CSS, and JavaScript.
In particular, we used the D3.js library for visualizations and other
front-end components. The usage of Font Awesome provides the small
icons you see throughout the page. There are two primary reasons for
this choice in the design.
For one, there is no need for big frameworks or additional external libraries
in the front-end. The main purpose of our website is providing visualizations
of the user’s datasets, and for that, plain JavaScript and D3.js
suffice. We do not want to distract the user with small, redundant,
or flamboyant features, and have therefore kept animations and dynamic
interactions to a minimum.
Secondly, we want our website to run as fast as possible. Adding
frameworks and libraries causes overhead for the site, slowing down
the rendering of the visualizations for the data, as an example. By
keeping the number of external imports to a minimum, the website will
not hamper the data analysis.
4.2.2 The “Data Selection” tab
After entering our site, the user will be directed to the data selection
page of Argus. Here, there are three options to start analyzing the
dataset of interest.
1. There is the option to upload a new dataset. The user will need to
enter a name for the dataset and (optionally) a short description
for the dataset for future reference, along with the dataset itself.
After the dataset has been successfully uploaded, the user will
be redirected to the visualizations tab with the uploaded data
selected, from where the data analysis can begin.
2. Alternatively, it is possible to browse datasets that were previously
uploaded by the user or by others. This list shows the name, short
description, and ID of every dataset. The user can either copy the
ID or click on it to be redirected to the visualizations tab.

3. Finally, a dataset can be accessed directly through its unique ID.
With this option, the user is redirected to the visualizations tab
and the corresponding data is loaded.
4.2.3 The “Visualizations” tab
This tab is central to Argus (Fig. 2). Here, the user will be able to create
and alter insightful figures for their data analysis. The layout was made
with user-friendliness in mind. We will go over each component of this
tab.
To begin with, there is a panel in which the user can control which
dataset, stimulus, and users are going to be viewed in the visualizations
(1). Additionally, there is a timestamp slider (2) next to the data se-
lection panel. The user can filter the current dataset for specific time
intervals through the use of this widget.
Then, there are the visualizations themselves (3). In the top-left
corner of the visualization container, the user has the option to choose
which visualization they want to show up in the corresponding box (4).
When the visualization is rendered, the user is free to pan and zoom the
visualizations to their liking. Since all of our visualizations work with
vector graphics, the quality of the visualization will not be harmed by
a scaling operation, apart from the background image, which is a raster
image rather than an SVG.
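As a rough sketch, panning, zooming, and the zoom reset can be wired up with D3’s zoom behavior roughly as follows (element names and parameters are illustrative, not our exact implementation):

```javascript
const svg = d3.select('#visualization');    // the container of one visualization box
const content = svg.select('g.content');    // group holding the stimulus and the drawn marks

const zoom = d3.zoom()
  .scaleExtent([0.5, 20])                   // limit how far the user can zoom out/in
  .on('zoom', (event) => content.attr('transform', event.transform));

svg.call(zoom);                             // dragging pans, scrolling zooms

// The magnifier button resets the view to its default state.
d3.select('#reset-zoom').on('click', () =>
  svg.transition().duration(250).call(zoom.transform, d3.zoomIdentity));
```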
Within each visualization, the user has a handful of options (5).
The wrench opens a menu which can be used to alter features of the
visualization, like the colors or the size of certain components. The
camera icon is used for saving a PNG image of the currently displayed
visualization to the user’s local device. Lastly, the magnifier resets the
zoom of the visualization, reverting it to its default state.

Fig. 3. A heatmap generated from the Berlin metro map data used in [23].
The frequency of visual attention is represented by colors, where by default
neon green means a high frequency; the more blue, the lower the frequency.
No coloring at all indicates that an area has not been looked at, or not long
enough to be represented in the heatmap. When hovering over the heatmap,
the corresponding density threshold is shown.
Finally, at (6), there are some global features for the user. On the
left, there is a button to export the current visualization settings. This
provides the user with a URL that can be entered at any time in the
future to obtain exactly the same visualization. To the right of that is
a checkbox to toggle the linking of zooming and panning between the
visualizations. This only works when the view mode is set to “split”,
which is part of the next feature we mention. The rightmost feature
which is part of the next feature we mention. The rightmost feature
enables the user to switch between “single” and “split” display. This
determines whether one or two visualizations will be shown on the
page.
4.3 Visualizations
We implemented five different visualization techniques, each providing
a different perspective of the data to the user.
4.3.1 Heatmap
This point-based visualization (see Fig. 3 for an example) provides an
overview of how much visual attention the areas of the stimulus have received.
Our reason for implementing a heatmap [4] is that it gives the researcher
an easy overview of the areas which were looked at the most
or the longest.
Technically speaking, the heatmap is a computed density plot of where the
users have looked. It is made using D3.js with a contour
density function. The density takes into account both the fixation
duration and the position of each fixation; the resulting contours are
then colored according to their density threshold.
For the user’s convenience, certain options are implemented. The most
significant one is that users can choose which bandwidth
to apply to the contour density function. A higher bandwidth
corresponds to larger colored areas on the stimulus, while a
lower bandwidth makes these areas more compact. The default bandwidth
of the Gaussian kernel is 20. A tooltip provides the user with the
threshold of the density. A slider for the opacity of the density areas
has been implemented as well; the default setting of 40 corresponds to
an alpha value of 0.4. A lower opacity makes it easier to see the underlying
stimulus and to distinguish contour shapes. In addition, users have
the possibility to change the primary and secondary heatmap colors.
Fig. 4. A scanpath visualization of the Tokyo metro map for one specific
user (p26) from [23]. This visualization allows us to carefully follow this
user’s eye movements. A larger radius corresponds to a longer fixation
duration. The numbers in the circles represent the order. Fixations 30
and 31 are selected.
This allows researchers to choose colors that suit a specific
stimulus or give a certain impression.
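A simplified sketch of this rendering with D3’s contour density function is shown below; the variable names (fixations, width, height, primaryColor, secondaryColor) are illustrative and the exact parameters of Argus may differ.

```javascript
const density = d3.contourDensity()
  .x(d => d.x)                              // MappedFixationPointX
  .y(d => d.y)                              // MappedFixationPointY
  .weight(d => d.duration)                  // longer fixations contribute more
  .size([width, height])                    // dimensions of the stimulus
  .bandwidth(20);                           // default bandwidth, adjustable by the user

const contours = density(fixations);

// Interpolate between the user-chosen secondary and primary colors.
const color = d3.scaleSequential(d3.interpolateRgb(secondaryColor, primaryColor))
  .domain([0, d3.max(contours, c => c.value)]);

svg.append('g')
  .attr('opacity', 0.4)                     // default opacity setting of 40
  .selectAll('path')
  .data(contours)
  .join('path')
  .attr('d', d3.geoPath())
  .attr('fill', c => color(c.value));
```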
4.3.2 Scanpath Visualization
The scanpath visualization is a point-based visualization that can be
used to follow the ‘path’ a user takes while ‘scanning’ a stimulus. A
scanpath is defined as an alternating sequence of fixations (as defined
in Sect. 3.1) and saccades. In turn, saccades are swift eye movements
between two fixations [3]. While saccades are not explicitly included
in the dataset, they can be plotted by connecting successive fixations.
This means, however, that we cannot determine the saccade duration.
This visualization is implemented using D3.js and vector graphics
(SVG). The current stimulus is used as the background. Then, for each
user, the scanpath is rendered in a unique color (to easily distinguish
between different scanpaths). Fixations are drawn as circles; saccades
are drawn as lines between them. In each circle a number is drawn,
which corresponds to the order of the fixation, starting at 1. A longer
fixation duration corresponds to a larger fixation radius, at a decreasing
rate (i.e. a fixation with a duration that is twice as long corresponds to
a radius less than twice as large). Saccades, on the other hand, have a
fixed thickness, since the saccade duration is unknown. Each scanpath
is contained in a separate group element, which allows us to easily hide
or show a subset of users.
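The sketch below illustrates how one user’s scanpath could be rendered along these lines; the variable names (fixations, userColor, maxDuration) are illustrative.

```javascript
// Radius grows with duration at a decreasing rate (square-root scale).
const r = d3.scaleSqrt().domain([0, maxDuration]).range([0, 25]);

const group = svg.append('g')               // one group per user, so it can be hidden as a whole
  .attr('class', 'scanpath')
  .attr('stroke', userColor);

// Saccades: fixed-width lines connecting successive fixations.
group.selectAll('line')
  .data(d3.pairs(fixations))                // [[f1, f2], [f2, f3], ...]
  .join('line')
  .attr('x1', ([a]) => a.x).attr('y1', ([a]) => a.y)
  .attr('x2', ([, b]) => b.x).attr('y2', ([, b]) => b.y);

// Fixations: circles whose radius encodes duration.
group.selectAll('circle')
  .data(fixations)
  .join('circle')
  .attr('cx', d => d.x).attr('cy', d => d.y)
  .attr('r', d => r(d.duration))
  .attr('fill', userColor).attr('fill-opacity', 0.5);

// Numbers indicating the order of the fixations, starting at 1.
group.selectAll('text')
  .data(fixations)
  .join('text')
  .attr('x', d => d.x).attr('y', d => d.y)
  .attr('text-anchor', 'middle')
  .text((d, i) => i + 1);
```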
Multiple scanpaths can be shown at the same time. When hovering
over one, it changes color (transparency is removed completely) and is
raised to the foreground, so it can be clearly inspected. Additionally, a
tooltip of the current fixation is shown, which includes its user, timestamp,
coordinates, and duration. If a fixation is clicked once, it is selected and
stays highlighted even when the mouse moves to another fixation. If
it is double-clicked, the data is filtered on the user of that fixation, meaning
all scanpaths except the highlighted one are hidden. If a second
visualization is shown simultaneously, its data will also be filtered.
A common problem with scanpath visualizations is that they intro-
duce visual clutter [3]. It is therefore possible (as described above)
to select individual users. Moreover, the opacity of the scanpath can
be changed. When (partly) transparent, not only the stimulus but also
other fixations and saccades can be seen below the scanpath. Areas
where many fixations are stacked on top of each other can even be
identified since they will be darker than other areas. To further decrease
clutter, fixation circles can be turned off altogether, so that only the saccades
remain visible. Users can further customize the plot by choosing all of
its colors.
Fig. 5. A transition diagram generated from the Bologna metro map data
used in [23]. The AOIs are represented by circles; the percentage of
fixations in each cluster, relative to the total number of fixations, is written
inside the circle. Arrows represent the transitions between the AOIs.
4.3.3 AOI Transition diagram
Transition diagrams are based on areas of interest. In this project, a
clustering algorithm was used to create AOIs. We chose k-means
clustering since it is one of the fastest clustering algorithms. The
k-means clustering algorithm is implemented in keeping with the
definition given by Iváncsy et al. [20]. At the start of the algorithm,
as many random points are chosen as the researcher wants clusters;
these points are the starting centers of the clusters. The coordinates of
these starting points lie in the range of the minimum and maximum x
and y coordinates of the fixation points. Thereafter, each fixation is
assigned to the closest cluster. When each fixation has been assigned,
a new center is calculated for each cluster by taking the mean of the x
and y coordinates of the fixations in that cluster. This process is
repeated until the clusters no longer change or 20 iterations have been
made. This restriction ensures that no endless loops take place; in our
testing, the clusters almost always stopped changing before 20 iterations.
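A compact sketch of this procedure is given below (using D3’s array helpers for brevity); it follows the description above but is not the literal production code.

```javascript
function kMeans(fixations, k, maxIterations = 20) {
  // Start with k random points within the bounding box of the fixations.
  const [xMin, xMax] = d3.extent(fixations, d => d.x);
  const [yMin, yMax] = d3.extent(fixations, d => d.y);
  const randomPoint = () => ({
    x: xMin + Math.random() * (xMax - xMin),
    y: yMin + Math.random() * (yMax - yMin),
  });
  let centers = d3.range(k).map(randomPoint);

  let assignment = [];
  for (let iter = 0; iter < maxIterations; iter++) {
    // Assign each fixation to the closest cluster center.
    const next = fixations.map(f =>
      d3.minIndex(centers, c => (c.x - f.x) ** 2 + (c.y - f.y) ** 2));
    if (next.every((c, i) => c === assignment[i])) break;  // clusters stopped changing
    assignment = next;

    // Recompute each center as the mean of its assigned fixations;
    // an empty cluster is re-seeded with a new random point.
    centers = centers.map((c, j) => {
      const members = fixations.filter((_, i) => assignment[i] === j);
      if (members.length === 0) return randomPoint();
      return { x: d3.mean(members, m => m.x), y: d3.mean(members, m => m.y) };
    });
  }
  return { centers, assignment };
}
```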
The transition diagram is used to show the size and position of areas
of interest and the transitions between the AOIs as can be seen in Fig. 5.
We chose to make an in-context transition diagram: the clusters are
shown on the stimulus, since this makes it easy for researchers to see
where the clusters are located on the stimulus.
This visualization was made using an SVG and the D3.js library in
JavaScript. A cluster is represented by a circle whose size depends
on the percentage of total fixations in the cluster. A transition is
represented by an arrow, whose thickness depends on the percentage of
outgoing transitions it accounts for (transitions that stay within an AOI
are not taken into account). For each pair of AOIs there are two arrows,
one for each direction of the transition.
A downside of using the k-means clustering algorithm is its use of
random starting points. A starting point may be chosen to which no
fixation is closest, which would create an empty cluster. To remedy this,
a new random point is selected to replace the empty cluster. Another
downside is that different clusterings can result from the same data;
this is inherent to the random point selection used in k-means clustering
and can therefore not be solved without changing the clustering
algorithm. Furthermore, transition diagrams do not show which fixations
are assigned to which cluster, so an option was added to show all
fixations colored in the color of their cluster. Finally, the standard
clustering does not take the fixation duration into account; an option
was added to take it into account when clustering.
Fig. 6. An example Eye Cloud of the Tokyo subway map looked at by a
large number of people [23].
4.3.4 Eye Clouds
The eye clouds visualization is based on the tool developed by Burch et
al. regarding attention clouds [12]. It displays snapshots of the fixation
points in the data set in a circle with a radius relative to the fixation
duration.
The centers of the eyes (small circles) are the (x,y)-coordinates of the
fixation points provided by the dataset. The radius of each individual
eye is determined by a mapping from the fixation duration values to a
range of values that makes the circles reasonably large to investigate.
In our case, this is realized using the built-in scaleSqrt function
provided by D3.js.
The eyes are small SVG figures. They are held together through
the force simulation system of D3.js. This system has been set up
in such a way that all eyes are attracted towards the center of the
container. So, even when a circle is dragged away from the eye cloud
(dragging was implemented so users can make small adjustments to the
overall composition), the eye will always try to return to the center.
However, the eyes are programmed so that they never overlap: the
center is occupied by a single (arbitrary) eye, while all the others align
around it. For clarity’s sake, each eye also has a small force field around
itself, so that there is a small amount of space between neighbouring
eyes.
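The sketch below shows how this layout could be expressed with D3’s force simulation; force strengths, radii, and variable names (fixations, maxDuration, width, height) are illustrative.

```javascript
const radius = d3.scaleSqrt()               // scaleSqrt: fixation duration -> eye radius
  .domain([0, maxDuration])
  .range([10, 60]);

const nodes = fixations.map(f => ({ ...f, r: radius(f.duration) }));

const simulation = d3.forceSimulation(nodes)
  .force('x', d3.forceX(width / 2))         // attract every eye towards the center...
  .force('y', d3.forceY(height / 2))
  .force('collide', d3.forceCollide(d => d.r + 2))   // ...but keep a small gap between eyes
  .on('tick', () => {
    d3.selectAll('.eye')                    // each eye is an SVG group with a clipped thumbnail
      .attr('transform', d => `translate(${d.x},${d.y})`);
  });
```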
When the user hovers over a single circle, the circle will receive a
border in a distinct color which the user can alter, but its default is red.
Moreover, a circle can be clicked, which toggles a (by default green)
border as a selection indicator.
Eye clouds are great for getting a general overview of which areas of
the map were looked at most. The downside of the eye clouds visualization,
however, is that it does not show where on the whole stimulus each
snapshot is located. This makes it hard to analyze the
results without a second visualization next to it, using the selection
interaction technique to highlight the corresponding areas in the snapshots.
4.3.5 Gaze Stripes
The gaze stripes visualization (as seen in Fig. 7) is a point-based
visualization which shows the selected stimulus in combination with a
timeline. In the gaze stripes, the x-axis shows the time aspect and the
y-axis shows the individual users. For a certain time interval, the
area around the fixation point which has been looked at the most in this
interval is copied and turned into a thumbnail. This visualization
gives insight into how fixations are related to their point in time.

Fig. 7. An example of the gaze stripes applied on a map of Tokyo’s metro
system [23]. Selected fixations are represented by a black box around
the thumbnail.
The gaze stripes visualization is implemented using D3.js and vector
graphics (SVG). For every stimulus, we find the maximal time t_max
over all users. We then define the time interval for this stimulus as
the maximal time divided by the number n of images we would like,
i.e. ∆t := t_max / n. Next, we determine for each time interval
[t_{i−1}, t_i] (where i = 1, ..., n and t_i = i·∆t) which fixation point
has been looked at the most. A rectangle of user-specified size around
that point is cropped from the stimulus and placed in the corresponding
user lane.
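In code, selecting the thumbnail for each interval of a user lane could look roughly like this (names are illustrative; the actual cropping into SVG thumbnails is omitted):

```javascript
function selectThumbnails(userFixations, tMax, n) {
  const dt = tMax / n;
  return d3.range(n).map(i => {
    const t0 = i * dt, t1 = (i + 1) * dt;
    // Fixations that (partly) fall into the interval [t_{i-1}, t_i].
    const inInterval = userFixations.filter(
      f => f.timestamp < t1 && f.timestamp + f.duration > t0);
    if (inInterval.length === 0) return null;          // nothing was looked at in this interval
    // Keep the fixation that was looked at the longest in this interval.
    return d3.greatest(inInterval, f => f.duration);
  });
}
```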
For a better overview, the gaze stripes can be shown together with
e.g. the scanpath visualization. Then when selecting a thumbnail, the
related fixation in the scanpath visualization will be indicated by a
black circle.
A drawback is that the thumbnails must be very small in order to
see the whole time overview. Zooming might remedy the small pictures,
but then the overview is lost. Also, changing the radius r of the cropped
region can help to show the pictures in more or less detail. One must keep
in mind that we only show the fixation which has been looked at the
longest in a certain time interval; therefore, some fixation points may be
neglected. The option of choosing the number of pictures can resolve
this to some extent. A tooltip shows how long a participant stared at a
fixation point. What is more, our gaze stripes do not show exactly when
a fixation started and stopped, as the participant could have started or
kept looking in the intervals before and after the one in which the gaze
is shown. Unfortunately, this problem cannot be remedied with our
current implementation.
4.4 Interaction Techniques
This section gives a brief overview of the seven interaction techniques
presented by Yi et al. [28], together with an implementation example of each.
• The select interaction technique is used to highlight certain data points
that are of interest to a researcher. This can be useful
to keep track of data when changing settings or switching to a
different visualization. Every representation of a fixation can be
clicked, after which it gets a border in all visualizations. This
can be seen in Fig. 7.
• The explore interaction technique is used to view different parts of
a visualization and see new parts of the data without changing the
selected data. We have implemented panning such that researchers
can drag and drop the visualization to see different parts of the
visualization.
[Fig. 8 chart: ‘Database performance’ — (a) time (ms) vs. number of datasets for the parse, load, and get operations; (b) time vs. number of records, logarithmic scales.]
Fig. 8. (a) The time it takes to perform database tasks depending on how
many datasets have already been uploaded, each containing roughly
120k records. Loading data into the database takes the most time,
especially if it is empty. Retrieving data from the database is trivial
compared to the other tasks. In short, the size of the database (at
levels we can expect) does not seem to impact the performance. (b)
The time it takes to perform database tasks depending on how large the
uploaded dataset is. The figure is scaled logarithmically to better spread
out measurements. There seems to be a linear relation between the
number of records and the time it takes to perform database operations.
• The reconfigure interaction technique makes it possible to rear-
range different aspects of a visualization. This makes it possible
for researchers to find a new perspective on their data by altering
how it is presented to them. A good example of how this can be
used is seen in the eye clouds visualization. In this visualization, it
is possible to rearrange the individual eye clouds to find different
patterns in the data.
• The encode interaction technique allows users to customize the
visualizations by changing how the visualization is displayed.
This includes but is not limited to: colors and sizes of elements in a
visualization. For example, in the transition diagram, it is possible
to change the colors of different elements in the visualization.
By changing the colors of one type of element it is possible to
distinguish these elements.
• The abstract/elaborate interaction technique makes it possible
to change the level of detail presented. This is needed to change
from an overview of all the data to a specific point in the data. One
way to abstract/elaborate is to zoom in and out, which is implemented
in all our visualizations.
• The filter interaction technique allows researchers to select different
parts of the data to visualize. Researchers can filter out
unwanted data; this data is not removed, but simply not used to
generate the visualization. It is possible to filter on users and on the
timestamps of fixations.
• The connect interaction technique can be split into two parts. The
related data part and the hidden data part.
The related data part allows researchers to see connections be-
tween data points. An example can be seen in the scanpath
visualization where hovering over a fixation will also highlight
all fixations of the same user.
The hidden data part gives researchers the possibility to see data
that is not shown normally in the visualization but might be of
use. For example, in the transition diagram, it is possible to see
all fixations colored in the color of the AOI they are part of.
5 APPLICATION EXAMPLE: TOKYO
In this section, eye tracking data will be used and analyzed in order
to give an example as to how you can gather useful insights by using
Argus.
The metro map we will look at is Tokyo’s (09 Tokyo S1). On this
map, the starting place is indicated by a green cursor and the destination
Fig. 9. From left to right, the scanpath, the heatmap and the AOI transition
diagram of Tokyo’s subway map.
is marked by a large red-white target. Keep in mind, however, that this
analysis is done using a specific set of eye tracking data provided to us
by other researchers.
Looking at the scanpath visualization for all users combined (Fig. 9),
it immediately jumps out that there is a very dense band of paths on the
straight line from start to finish. A lot of people have looked at the
space between their points of interest and found their route. However,
there is also a great deal of paths going up and left from the start, and
even some to the right, below the aforementioned dense band.
Continuing with the AOI transition diagram (Fig. 9), we observe yet
again that the two biggest clusters lie at the start and end of the track.
Here, however, the cluster underneath the destination is indicated as the
next largest cluster. This differs from what we observed in the scanpath,
where it looked like the line in between start and end was more densely
looked at, followed by the area to the left of the destination. More
research will be needed for the final conclusion.
Next, we look at the heatmap visualization (Fig. 9). It appears to be
a consistent result that the start and destination get the most attention
because, again, both appear with high intensity. In this visualization,
the area to the left appears to be more intensely fixated at than the space
in between and to the right of the start and destination. This result
is consistent with the results of the scanpath visualization. In case
the conflicting results with the AOI transition diagram leave room for doubt,
however, more visualizations can be used for further analysis. Moving
on to the gaze stripes (Fig. 7) and the eye clouds (Fig. 6), we conclude one
final time that the most attention is given to the start and the target. Apart from
that, there appear to be a lot of snapshots in the gaze stripes, and a rather
high number of large eyes, in which the Shinjuku-sanchome line is
displayed. This is again the area to the left of the target and above the
start.
6 DISCUSSION AND LIMITATIONS
In this section, we will explain and reflect on some of the choices we
made while designing our tool, as well as outline its limitations and list
some improvements for future work.
6.1 Filtering
While we do provide some filtering options, our tool lacks others.
For instance, although our tool supports showing either one user or
all simultaneously, it is not possible to filter users individually (i.e.
allowing any combination). Of course, it is possible to open two
instances of our tool, showing one user each, but this is not a real fix. A
much more flexible solution would be to have a list of users that could
be checked or unchecked individually.
Similarly, while it is possible to filter the data based on a timestamp
interval, our tool cannot filter on fixation duration. Such an option
would have been interesting, since it would have been possible to filter
out insignificant fixations, reducing clutter for the scanpath and eye
clouds. In addition, by looking at the fixations with the longest duration,
areas of interest could be more easily identified.
6.2 Clustering
We implemented only one way of data clustering: a k-means clustering
algorithm (see Sect. 4.3.3). Often, AOIs are based on semantic informa-
tion [3]. In our case, however, this is not possible. While the number
of clusters can be specified by the user, we do not provide many other
options regarding clustering. Since the result of the algorithm depends
on the initial position of the clusters (which is random) [20], running
the algorithm again on the same dataset will not guarantee to produce
identical results. Moreover, k-means clustering only works well for
spherical data and is not well suited to noise [20], both of which may
not be true for our data. The flexibility of our tool could be improved
by allowing users to specify AOIs themselves, or letting them choose
between different clustering algorithms.
6.3 User Interface
For our user interface, we chose not to use any existing frameworks
like Bootstrap or React. We opted instead to implement everything
ourselves from scratch, using plain HTML and CSS. This has the
benefit of no overhead, meaning improved performance. On the other
hand, development time is increased and additional bugs could be
introduced since we have to implement every UI feature ourselves. In
the end, though, we are happy with our choice, since it offered us a lot
of flexibility and made our tool quick and therefore easy to use.
We do worry, however, that our design will not always be intuitive
for users. For example, due to technical reasons, it is not possible to
open the same type of visualization twice; for users, this may not be
clear. Furthermore, there are many options, some of which are hidden
behind menus. While they are all accompanied by a label or tooltip,
their function may still not be immediately obvious. We think that
using a built-in tutorial, one that guides the users step-by-step through
the options, would have improved the usability of our tool.
6.4 Database Performance
It only takes a few hundred milliseconds for the visualizations to be
shown after selecting the data (dataset, stimulus, user), if not less.
As a matter of fact, the most time-consuming process of our tool is
data parsing. That is why we have analyzed its performance. For
this we used the metro maps dataset from Netzel et al. [23] and our
production server (see Sect. 4.1.1). Note that these tests do not include
the unzipping and copying of images. We ran each test two times and
took the average to reduce the effect of fluctuations. On average, the
three operations combined only take a very respectable three seconds,
excluding HTTP requests (see Fig. 8a). While performance could have
been improved using a NoSQL database (at the cost of less structure),
our current solution is certainly faster than storing plain CSV files. This
is especially true since our tables are indexed, meaning retrieval is sped
up at the cost of load time. This trade-off can clearly be seen in Fig. 8a.
The same figure shows that the performance of the database does not
depend on how ‘full’ it is. Fig. 8b shows that database operations take
longer, the larger the uploaded dataset is.
However, the time it takes to perform tasks heavily fluctuates. Indeed,
we have had instances where ‘load’ took a staggering 10 seconds. We
think there are several reasons for this. Firstly, our website is hosted
on a shared system, which means that temporary network surges for
other servers on the same system could impact the performance of our
website and database. Secondly, the fact that our database is a (local)
server itself can also contribute to the fluctuations. This is a trade-off
between network overhead and ease of development.
6.5 Security and Data Validation
Since we are using an SQL database, our tool is vulnerable to SQL
injection attacks. While we did try to prevent these types of attacks by
sanitizing the input, we did not focus on (database) security for this
project (e.g. we did not consider second-order injections), since our
tool does not handle sensitive user data. Cross-site scripting (XSS) is
another potential vulnerability that we did not look into.
Another area our tool can be improved in is input data validation.
Currently, as described in Sect. 3.3, there is only very limited check-
ing with regard to the data format. For example, both the order of
columns and the column data types (e.g. numerical or text-based) are
not checked, but just assumed correct. Also, no sanity checks (e.g.
whether coordinates or timestamps are realistic) or stimulus checks
(i.e. whether the specified stimuli actually exist) are performed. Even
though our upload tool will still reject most (seriously) malformed files,
it does not provide clear feedback to the user. Moreover, when an
uploaded dataset is rejected due to its size, the user is also not clearly
informed. In this regard, user experience can be improved.
7 CONCLUSION
In this paper, Argus, a fast, user-friendly visualization tool for eye track-
ing data, was presented. We discussed the data model of the dataset and
how the data was parsed by, among other things, removing the diacritics
and normalizing the timestamp. Furthermore, this report looked at how
the back-end and front-end work together to intuitively and efficiently
guide researchers through the process of generating visualizations for
their data. Additionally, we closely analyzed the five implemented vi-
sualizations, namely the heatmap, scanpath visualization, gaze stripes,
eye clouds and AOI transition diagram by looking at the goal of each
visualization, its implementation and the drawbacks. We also reviewed
how the seven interaction techniques were successfully integrated in this tool. On
top of that, we showed our tool in action with an application example
to demonstrate the abilities of Argus. Finally, we considered the limitations
of Argus and what can be done to improve the tool. Future research
should consider additional ways of filtering users and other clustering
algorithms, to name a few.
ACKNOWLEDGMENTS
The authors wish to thank a handful of people who contributed to the
success of this project either directly or indirectly. First and foremost,
we would like to thank Catalin Ionescu for his excellent tutoring. With
his guidance, we were able to learn how to cooperate in a new and
challenging environment.
Secondly, we wish to express our appreciation to the staff of this
course who made our project possible, namely Elisabeth Melby and
Michael Burch. The feedback given by Elisabeth Melby on the cooper-
ation through Scrum and the feedback given by Michael Burch on the
interim paper gave the authors a chance to grow and improve.
REFERENCES
[1] F. Beck, M. Burch, and S. Diehl. Matching application requirements with
dynamic graph visualization profiles. In Proceedings of 17th International
Conference on Information Visualisation, IV, pp. 11–18. IEEE Computer
Society, 2013.
[2] F. Beck, M. Burch, T. Munz, L. D. Silvestro, and D. Weiskopf. General-
ized pythagoras trees: A fractal approach to hierarchy visualization. In
Proceedings of International Conference on Computer Vision, Imaging
and Computer Graphics - Theory and Applications - International Joint
Conference, VISIGRAPP, vol. 550 of Communications in Computer and
Information Science, pp. 115–135. Springer, 2014.
[3] T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf, and
T. Ertl. Visualization of eye tracking data: A taxonomy and survey.
Comput. Graph. Forum, 36(8):260–284, 2017. doi: 10.1111/cgf.13079
[4] A. Bojko. Informative or misleading? heatmaps deconstructed. In J. A.
Jacko, ed., Human-Computer Interaction. New Trends, 13th International
Conference, HCI International 2009, San Diego, CA, USA, July 19-24,
2009, Proceedings, Part I, vol. 5610 of Lecture Notes in Computer Science,
pp. 30–39. Springer, 2009. doi: 10.1007/978-3-642-02574-7_4
[5] M. Bostock, V. Ogievetsky, and J. Heer. D3 data-driven documents. IEEE
Trans. Vis. Comput. Graph., 17(12):2301–2309, 2011. doi: 10.1109/TVCG
.2011.185
[6] M. Burch. Time-preserving visual attention maps. Intelligent Deci-
sion Technologies 2016 Smart Innovation, Systems and Technologies,
p. 273–283, 2016. doi: 10.1007/978-3-319-39627-9_24
[7] M. Burch, M. Hlawatsch, and D. Weiskopf. Visualizing a sequence of a
thousand graphs (or even more). Computer Graphics Forum, 36(3):261–
271, 2017.
[8] M. Burch, M. Höferlin, and D. Weiskopf. Layered TimeRadarTrees. In
Proceedings of 15th International Conference on Information Visualisa-
tion, IV, pp. 18–25. IEEE Computer Society, 2011.
[9] M. Burch, S. Lohmann, F. Beck, N. Rodriguez, L. D. Silvestro, and
D. Weiskopf. Radcloud: Visualizing multiple texts with merged word
clouds. In Proceedings of 18th International Conference on Information
Visualisation, IV, pp. 108–113. IEEE Computer Society, 2014.
[10] M. Burch, C. Müller, G. Reina, H. Schmauder, M. Greis, and D. Weiskopf.
Visualizing dynamic call graphs. In Proceedings of the Vision, Modeling,
and Visualization Workshop 2012, pp. 207–214. Eurographics Association,
2012.
[11] M. Burch and N. Timmermans. Sankeye: A visualization technique for
AOI transitions. In A. Bulling, A. Huckauf, E. Jain, R. Radach, and
D. Weiskopf, eds., ETRA ’20: 2020 Symposium on Eye Tracking Research
and Applications, Short Papers, Stuttgart, Germany, June 2-5, 2020, pp.
48:1–48:5. ACM, 2020. doi: 10.1145/3379156.3391833
[12] M. Burch, A. Veneri, and B. Sun. Eyeclouds: A visualization and analysis
tool for exploring eye movement data. In Proceedings of the 12th Interna-
tional Symposium on Visual Information Communication and Interaction,
VINCI 2019, Shanghai, China, September 20-22, 2019, pp. 8:1–8:8. ACM,
2019. doi: 10.1145/3356422.3356423
[13] M. Burch and D. Weiskopf. A flip-book of edge-splatted small multiples
for visualizing dynamic graphs. In Proceedings of the 7th International
Symposium on Visual Information Communication and Interaction, VINCI,
p. 29. ACM, 2014.
[14] M. Burch and D. Weiskopf. On the benefits and drawbacks of radial
diagrams. In W. Huang, ed., Handbook of Human Centric Visualization,
pp. 429–451. Springer, 2014.
[15] N. Charness, E. M. Reingold, M. Pomplun, and D. M. Stampe. The
perceptual aspect of skilled performance in chess: Evidence from eye
movements. Memory Cognition, 29(8):1146–1152, 2001. doi: 10.3758/
bf03206384
[16] T. Fawcett. The eyes have it: Eye tracking data visualizations of viewing
patterns of statistical graphics. All Graduate Plan B and other Reports,
787:1–6, May 2016.
[17] R. S. Hessels, C. Kemner, C. Boomen, and I. T. Hooge. The area-of-
interest problem in eyetracking research: A noise-robust solution for face
and sparse stimuli. Behavior Research Methods, 48(4), Dec 2016. doi: 10.
3758/s13428-015-0676-y
[18] K. Holmqvist. Eye tracking: a comprehensive guide to methods and
measures. Oxford University Press, 2011.
[19] K. Holmqvist, J. Holsanova, M. Barthelson, and D. Lundqvist. Reading
or scanning? A study of newspaper and net paper reading., pp. 657–670.
Elsevier, United States, 2003. In cooperation with Humanistlaboratoriet,
Lund university.
[20] R. Iváncsy, A. Babos, and C. Legány. Analysis and extensions of popular
clustering algorithms. 01 2005.
[21] K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf.
Gaze stripes: Image-based visualization of eye tracking data. IEEE Trans.
Vis. Comput. Graph., 22(1):1005–1014, 2016. doi: 10.1109/TVCG.2015.
2468091
[22] K. Kurzhals and D. Weiskopf. AOI transition trees. In H. R. Zhang and
T. Tang, eds., Proceedings of the 41st Graphics Interface Conference,
Halifax, NS, Canada, June 3-5, 2015, pp. 41–48. ACM, 2015.
[23] R. Netzel, B. Ohlhausen, K. Kurzhals, R. Woods, M. Burch, and
D. Weiskopf. User performance and reading strategies for metro maps:
An eye tracking study. Spatial Cognition & Computation, 17(1-2):39–64,
2017. doi: 10.1080/13875868.2016.1226839
[24] J. C. Roberts. State of the art: Coordinated multiple views in exploratory
visualization. Fifth International Conference on Coordinated and Multiple
Views in Exploratory Visualization (CMV 2007), 2007. doi: 10.1109/cmv.
2007.20
[25] L. F. Scinto, R. Pillalamarri, and R. Karsh. Cognitive strategies for visual
search. Acta Psychologica, 62(3):263–292, 1986. doi: 10.1016/0001-6918
(86)90091-0
[26] O. Spakov and D. Miniotas. Visualization of eye gaze data using heat
maps. Eletronics and Electrical Engineering, 2(74):55–58, 2007.
[27] C. Vehlow, M. Burch, H. Schmauder, and D. Weiskopf. Radial layered ma-
trix visualization of dynamic graphs. In Proceedings of 17th International
Conference on Information Visualisation, IV, pp. 51–58. IEEE Computer
Society, 2013.
[28] J. S. Yi, Y. ah Kang, J. T. Stasko, and J. A. Jacko. Toward a deeper
understanding of the role of interaction in information visualization. IEEE
Trans. Vis. Comput. Graph., 13(6):1224–1231, 2007. doi: 10.1109/TVCG.
2007.70515

 
SEMANTIC VISUALIZATION AND NAVIGATION IN TEXTUAL CORPUS
SEMANTIC VISUALIZATION AND NAVIGATION IN TEXTUAL CORPUSSEMANTIC VISUALIZATION AND NAVIGATION IN TEXTUAL CORPUS
SEMANTIC VISUALIZATION AND NAVIGATION IN TEXTUAL CORPUS
 
A Study on Data Visualization Techniques of Spatio Temporal Data
A Study on Data Visualization Techniques of Spatio Temporal DataA Study on Data Visualization Techniques of Spatio Temporal Data
A Study on Data Visualization Techniques of Spatio Temporal Data
 
Detection and Tracking of Objects: A Detailed Study
Detection and Tracking of Objects: A Detailed StudyDetection and Tracking of Objects: A Detailed Study
Detection and Tracking of Objects: A Detailed Study
 
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
 
XJTLU_Conference(6)
XJTLU_Conference(6)XJTLU_Conference(6)
XJTLU_Conference(6)
 
Video Data Visualization System : Semantic Classification and Personalization
Video Data Visualization System : Semantic Classification and Personalization  Video Data Visualization System : Semantic Classification and Personalization
Video Data Visualization System : Semantic Classification and Personalization
 
Video Data Visualization System : Semantic Classification and Personalization
Video Data Visualization System : Semantic Classification and Personalization  Video Data Visualization System : Semantic Classification and Personalization
Video Data Visualization System : Semantic Classification and Personalization
 
A Framework for Automated Association Mining Over Multiple Databases
A Framework for Automated Association Mining Over Multiple DatabasesA Framework for Automated Association Mining Over Multiple Databases
A Framework for Automated Association Mining Over Multiple Databases
 

Recently uploaded

VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...Suhani Kapoor
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Callshivangimorya083
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Data Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxData Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxFurkanTasci3
 
Data Warehouse , Data Cube Computation
Data Warehouse   , Data Cube ComputationData Warehouse   , Data Cube Computation
Data Warehouse , Data Cube Computationsit20ad004
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsappssapnasaifi408
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfLars Albertsson
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxStephen266013
 
RadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfRadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfgstagge
 
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiVIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiSuhani Kapoor
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...Suhani Kapoor
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130Suhani Kapoor
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 

Recently uploaded (20)

VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
 
Data Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxData Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptx
 
Data Warehouse , Data Cube Computation
Data Warehouse   , Data Cube ComputationData Warehouse   , Data Cube Computation
Data Warehouse , Data Cube Computation
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdf
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docx
 
RadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdfRadioAdProWritingCinderellabyButleri.pdf
RadioAdProWritingCinderellabyButleri.pdf
 
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiVIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
 
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
 

Interactively Linked Eye Tracking Visualizations

  • 1. Interactively Linked Eye Tracking Visualizations Wietske de Bondt, Bartjan Henkemans, Jeroen Lamme, Gijs Pennings, Lászlo Roovers, and Michael Burch Fig. 1. Eye clouds visualization of Bordeaux (left) and heatmap visualization of Warschau (right) of the metro maps dataset [23]. Abstract—In this report, Argus, a tool for generating visualizations for eye tracking data is presented. There are numerous ways to visually present eye tracking data: heatmaps, scanpath, gaze stripes, eye clouds and AOI transition diagrams to name a few. On top of that, there are multiple ways to interact with these visualizations like selecting users, stimuli and fixation points to compare these features between the different visualizations. All of the aforementioned visualizations and interaction techniques are implemented into this tool. This report describes these visualizations and interactions including their advantages and disadvantages and how they are used in understanding eye tracking data. Furthermore, the report also looks at the structure of the dataset, how the tool runs on a server, how data is stored and the design philosophy of the website. Finally, the tool is previewed by means of an application example and the performance and limitations are discussed. Index Terms—Eye Tracking, Information visualization, User Interaction 1 INTRODUCTION Eye tracking is getting more and more attention, and not without reason. Insights that follow from eye tracking can be applied in several research fields, ranging from psychology and education to sports analysis and electrical engineering [15,16,18]. Advanced analytics are required for researchers to understand a great amount of data. In the broad field of eye tracking visualizations, one can easily lose sight. Blascheck et al. [3] have made a taxonomy for eye tracking visualizations, which helps when deciding on what visualizations to make. Although there are many eye tracking visualization tools already known, little attention has been given to the linking of these. This is especially important for data analysts, since interaction techniques provide a way to change the visualizations until they find new insights. Moreover, providing these visualizations in an online environment allows researchers to easily share their visualizations without installing proprietary software. Yi et al. [28] already have researched seven different interaction techniques, on top of which our research builds by implementing all of these interaction techniques in a web-based tool. The primary aim of this research project was to gain useful in- • Wietske de Bondt (1442880), e-mail: w.p.d.bondt@student.tue.nl. • Bartjan Henkemans (1414976), e-mail: b.henkemans@student.tue.nl. • Jeroen Lamme (1443062), e-mail: j.s.k.lamme@student.tue.nl. • Gijs Pennings (1441388), e-mail: g.p.s.pennings@student.tue.nl. • Lászlo Roovers (1439251), e-mail: l.roovers@student.tue.nl. sights into eye movement data by providing multiple web-based eye movement visualizations, which are interactively linked. This paper describes the implementation of a heatmap, a scanpath visualization, a transition diagram, eye clouds and a gaze stripes visualization in a multiple coordinated view [24]. The latter implies that multiple visualizations can be seen in one overview. We implemented our visualization tool using JavaScript as a pro- gramming language with Node.js as a runtime environment for the back-end, and D3.js to implement our visualizations in the front-end. 
Furthermore, we used a MySQL database to store and retrieve data. In this way, the interactive graphics can be viewed in a web browser. This also allows other data analysts to upload their data and gain insights by viewing visualizations from several perspectives. The usefulness of our web-based visualization tool is illustrated by applying it to eye movement data of public transport maps, used in another eye tracking study [23].

2 RELATED WORK

Eye tracking visualizations consist of both a representation component and an interactive component. Concerning the representation part, a distinction can be made between point-based and AOI-based (short for 'area of interest') visualizations. On the one hand, point-based visualizations use the x- and y-coordinates of fixations, optionally together with time-related information. On the other hand, AOI-based visualizations use extra information about the data: they define AOIs, which are areas or objects of interest on a stimulus. AOIs need to be defined either by the researchers themselves or with the help of clustering algorithms.
However, there is no single clear guideline for choosing AOIs, which makes AOIs less objective. Therefore, AOI-based visualizations are more difficult to realize, which has been addressed by Hessels et al. [17].

Several well-known point-based visualization techniques will now be described. Bojko [4] describes the usage of heatmaps and how they should be handled with care. The main advantages of heatmaps are that their color representation is very intuitive and that the heatmap is shown in the same figure as the stimulus itself, such that little mental effort is needed to interpret it. A disadvantage of a heatmap is that it can hide details of the stimulus [26]. In addition, a heatmap only shows the aggregation of visual attention over time and neglects the time axis in its visualization. A time-preserving visual attention map would have been a different approach to make sure the time aspect is covered [6].

Another visualization which takes the time aspect into consideration is the scanpath visualization [25]. In this visualization, the scanpaths of all users are shown over the actual stimulus. Hence, again, little mental effort is required to see relevant connections. However, a downside to this visualization is that visual clutter is inevitable with an increasing number of users, fixations, and saccades.

Gaze stripes and their use are described by Kurzhals et al. [21]. A gaze stripes visualization shows a timeline with cropped images from the used stimulus. The first advantage of this visualization is that it becomes easy to recognize common patterns in scanpaths because of the ordered timelines. Secondly, gaze stripes include a time component, which gives the visualization a very clear temporal overview. Unfortunately, with larger sample sizes it becomes difficult to see all the data on one screen.

Burch et al. [12] describe the usage of an eye cloud visualization. The visualization is based on thumbnails which grow in size the longer a fixation lasts. The two main advantages of an eye cloud are, first, that the areas focused on for the longest continuous period of time are easily noticed, and second, that the most commonly fixated areas are easily distinguished. One disadvantage of an eye cloud is that it has no temporal overview. Another is that visual clutter is inevitable when studying a dataset with many fixations.

An example of what can be done with AOIs is looking at transitions between AOIs, which are defined as a saccadic movement between two AOIs [3]. Kurzhals and Weiskopf [22] have looked into AOI transition trees. This visualization shows objects of interest and identifies patterns between transitions of objects. Moreover, Burch and Timmermans [11] describe the Sankey technique, which is another approach for visualizing AOI transitions. The main advantage of AOI transition visualizations is that they give a clear overview of how the AOIs have been looked at, as can be seen in a study of how newspapers are read [19]. However, disadvantages of AOI transition visualizations are that AOIs need to be defined correctly and that there is no temporal aspect.

Concerning the interaction component of visualizations [1,2,7–10,13,14,27], seven main interaction techniques between the user and a visualization system are provided by Yi et al. [28]. These categories are: select, explore, reconfigure, encode, abstract/elaborate, filter, and connect.
An elaboration of this can be found in Sect. 4.4. In order to ease the creation of visualizations, several programming libraries have been developed. An example of this is D3.js, as documented in [5].

3 DATA MODEL AND PROCESSING

In this section, we cover the data format and how uploaded files are parsed. First, we define some important terms. Next, we discuss the exact format of the dataset. Finally, we explain how the data is parsed.

3.1 Definitions

In eye tracking experiments, participants are shown visual content. In this paper, we refer to this content as the stimulus. In experiments, participants either wear eye tracking glasses or an eye tracker is mounted close to or integrated into, for example, a monitor. In both cases, the device records where participants are looking dozens of times per second. These so-called gaze points are then aggregated based on area and time into fixations, which are defined by their position and duration [3].

3.2 Data Model

The data for our visualization tool must be uploaded as a ZIP archive. The structure and format were inspired by Netzel et al. [23]. All files inside are searched until we find a folder named 'stimuli'; the images inside are stored to use them for the visualizations. The first comma-separated values (CSV) file we find is parsed as described below.

In the CSV file, fields must be separated by tabs and records by newlines. There are eight columns in total. Each record represents a fixation of some user ('user') for some stimulus ('StimuliName') at some time ('Timestamp'). The columns 'FixationDuration', 'MappedFixationPointX', and 'MappedFixationPointY' describe the duration and position of the fixation, respectively. Only the stimulus and user columns contain text; the four columns just mentioned contain numerical data. The remaining two columns ('FixationIndex' and 'description') are not used for visualizations in our tool.

3.3 Data Parsing

Before the data can be used to create visualizations, the CSV file is processed. A unique 4-character ID consisting of digits and lowercase letters is generated, which allows the back-end to store and identify different datasets. Researchers can also use this ID to request their previously uploaded data. The first line of the CSV file is assumed to contain the column names. The 'FixationIndex' and 'description' columns are discarded since they do not contain information we use while generating visualizations. Then, in the 'StimuliName' column, encoding errors are fixed (e.g. the mojibake sequence 'Ã¼' is replaced by 'ü') and all diacritics (accents) are replaced by the corresponding 'bare' letter (e.g. 'ü' is replaced by 'u') to prevent any future encoding and display errors. Finally, using the 'FixationDuration' column, the 'Timestamp' column is recalculated such that it starts at 0 for every user interacting with some stimulus. Then, the dataset is loaded into the database together with its newly assigned ID.

There are some checks in place to make sure uploaded data is in the right format. For instance, if a record does not contain the correct number of fields, parsing is terminated. However, not all aspects are checked. For example, the first line containing the column names is skipped completely. This implies the columns must be provided in the correct order, since the parser will not adapt. Other (edge) cases are touched on in the discussion.
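As an illustration of the parsing steps described above, the following is a minimal sketch in Node.js-style JavaScript. It is not the actual implementation: the column indices, the exact normalization rules, and the timestamp reconstruction (here rebuilt from cumulative durations) are assumptions.

```js
const crypto = require('crypto');

// Generate a 4-character dataset ID from digits and lowercase letters.
function generateId() {
  const alphabet = '0123456789abcdefghijklmnopqrstuvwxyz';
  return Array.from(crypto.randomBytes(4), b => alphabet[b % alphabet.length]).join('');
}

// Repair Latin-1 mojibake (e.g. 'Ã¼' -> 'ü'), then strip diacritics ('ü' -> 'u'); simplified.
function normalizeStimulus(name) {
  return Buffer.from(name, 'latin1').toString('utf8')
    .normalize('NFD').replace(/[\u0300-\u036f]/g, '');
}

// Parse the tab-separated file into fixation records (sketch; column indices are illustrative).
function parseDataset(text) {
  const lines = text.split(/\r?\n/).filter(l => l.trim().length > 0);
  const running = new Map();               // cumulative time per (user, stimulus)
  return lines.slice(1).map(line => {      // the first line holds the column names
    const f = line.split('\t');
    if (f.length !== 8) throw new Error('Unexpected number of fields');
    const user = f[6], stimulus = normalizeStimulus(f[1]);
    const duration = Number(f[3]), x = Number(f[4]), y = Number(f[5]);
    const key = `${user}|${stimulus}`;
    const start = running.get(key) || 0;   // rebuilt timestamp starts at 0 per user/stimulus
    running.set(key, start + duration);
    return { user, stimulus, timestamp: start, duration, x, y };
  });
}
```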
4 VISUALIZATION TOOL: ARGUS

In this section, we discuss what tools and frameworks we used for the back-end and the front-end, and why we made these choices. We also explain the layout of, and design choices for, the graphical user interface. Lastly, we list the eye tracking visualizations and interaction techniques we implemented.

4.1 Back-end Architecture

Naturally, to host our web application we required a server. However, due to the nature of our project, a server that serves static (i.e. simple, unchanging) web pages would not suffice. We needed a server that supported file uploads and that could parse and store data in a database, in addition to serving interactive web pages.

4.1.1 Server

We chose an off-site server from DigitalOcean with 1 virtual CPU, 3 gigabytes of random access memory, and 25 gigabytes of storage space. The server's operating system is a 64-bit version of Ubuntu 20.04 (LTS), on which Node.js and a MySQL server run. As this project will see sporadic use and the graphs are rendered in the front-end, thus utilizing the processing power of the user rather than that of the server, a more powerful server is not necessary. Currently, a dataset of roughly 120,000 entries is uploaded, parsed, stored, and then served in under three seconds. However, if the project were to see more use, the server could easily be resized to accommodate the increase in traffic. We will return to the performance of our tool in the Discussion and Limitations (Sect. 6).
Fig. 2. The visualizations page of Argus. Here we can see the data selection panel (1), the timestamp slider (2), the actual visualizations (3), the visualization selection menu (4), the local tool panel (5), and the global tool panel (6).

4.1.2 Node.js

For our web server, we chose Node.js, a JavaScript runtime, in combination with Express, a web framework. This has the advantage that we use JavaScript for both the front-end and the back-end, allowing people to easily contribute to both. Since this is a common combination with a large community, ample documentation can be found online. The flexible and minimalist nature of Express results in only a small amount of overhead and fast development times. In addition, Node.js has excellent support for asynchronous programming, which is well suited to our project (e.g. databases, HTTP, parsing files).

4.1.3 MySQL

For this project, storing the datasets uploaded by users was a vital part of the functionality of the website. Data storage is convenient because users do not have to re-upload their dataset and stimuli after closing a viewing session. Storing the datasets in a database is also convenient for sharing visualizations, as it allows users to send a link to their visualizations to other researchers, instead of sending the dataset and the stimuli for them to upload to the website.

When choosing how to store data, there were two major options: an SQL server or a NoSQL server. For this project, we elected to use MySQL to store the imported datasets, as the information to be processed has a precise and rigid structure; a NoSQL database would not enforce this structure. Furthermore, there was a need to store extra information about our datasets, like a title, a description, and a timestamp of upload. Through the relational database we can link this extra information to the datasets and the fixation points. However, the disadvantage of using MySQL is slower read times in comparison to a NoSQL database, because the server does relatively complex work to enforce the structure of its tables and its relations. The upside of enforcing structure and maintaining relations outweighed the downside of slightly slower read times; therefore, MySQL was chosen.

The MySQL server is hosted on the same machine as the Node.js server. For security reasons, external connections to the database are disallowed; only local connections, like the Node.js web server, are allowed.
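To make the relational linking described above concrete, a schema along these lines could be used. This is only a sketch with illustrative table and column names (and assuming the mysql2 client); it is not the schema actually used by Argus.

```js
const mysql = require('mysql2/promise');

// Sketch of a relational layout linking dataset metadata to fixation points.
async function createTables() {
  const db = await mysql.createConnection({ host: 'localhost', user: 'argus', database: 'argus' });
  await db.query(`CREATE TABLE IF NOT EXISTS datasets (
      id          CHAR(4) PRIMARY KEY,               -- the generated 4-character ID
      title       VARCHAR(255) NOT NULL,
      description TEXT,
      uploaded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )`);
  await db.query(`CREATE TABLE IF NOT EXISTS fixations (
      dataset_id CHAR(4) NOT NULL,
      user_name  VARCHAR(64) NOT NULL,
      stimulus   VARCHAR(255) NOT NULL,
      timestamp  INT NOT NULL,
      duration   INT NOT NULL,
      x          INT NOT NULL,
      y          INT NOT NULL,
      INDEX (dataset_id, stimulus, user_name),       -- indexed for fast retrieval (cf. Sect. 6.4)
      FOREIGN KEY (dataset_id) REFERENCES datasets(id)
    )`);
  await db.end();
}
```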
4.2 Front-end Architecture

The front-end of our website is what enables the interaction between the user and the tool. It has been constructed with the aim of providing a pleasant experience for researchers. How this was done is explained in the upcoming sections.

4.2.1 General Front-end Structure

The front-end of our project is built using HTML, CSS, and JavaScript. In particular, we used the D3.js library for visualizations and other front-end components. Font Awesome provides the small icons seen throughout the page. There are two primary reasons for this design choice. For one, there is no necessity for big frameworks or external libraries for the front-end. The main purpose of our website is providing visualizations for the user's datasets, and for that, plain JavaScript and D3.js are sufficient. We do not want to distract the user with small, redundant, or flamboyant features, and have therefore kept animation and dynamic interactions to a minimum. Secondly, we want our website to run as fast as possible. Adding frameworks and libraries causes overhead for the site, slowing down, for example, the rendering of the visualizations. By keeping the number of external imports to a minimum, the website will not hamper the data analysis.

4.2.2 The "Data Selection" tab

After entering our site, the user is directed to the data selection page of Argus. Here, there are three options to get to analyzing the dataset of interest.

1. There is the option to upload a new dataset. The user needs to enter a name for the dataset and (optionally) a short description for future reference, along with the dataset itself. After the dataset has been successfully uploaded, the user is redirected to the visualizations tab with the uploaded data selected, from where the data analysis can begin.

2. Alternatively, it is possible to browse datasets that were previously uploaded by the user or by others. This view shows the name and short description of all datasets, along with their IDs. The user can either copy the ID or click on it to be redirected to the visualizations tab.

3. A dataset can also be accessed directly when its unique ID is known. With this option, the user is redirected to the visualizations tab and the correct data is loaded.

4.2.3 The "Visualizations" tab

This tab is central to Argus (Fig. 2). Here, the user is able to create and alter insightful figures for their data analysis. The layout was made with user-friendliness in mind. We will go over each component of this tab.

To begin with, there is a panel in which the user can control which dataset, stimulus, and users are going to be viewed in the visualizations (1). Additionally, there is a timestamp slider (2) next to the data selection panel. The user can filter the current dataset for specific time intervals through this widget.

Then, there are the visualizations themselves (3). In the top-left corner of the visualization container, the user can choose which visualization should appear in the corresponding box (4). When the visualization is rendered, the user is free to pan and zoom the visualizations to their liking. Since all of our visualizations work with vector graphics, the quality of the visualization is not harmed by a scaling operation, apart from the background image, which is not an SVG itself.

Within each visualization, the user has a handful of options (5). The wrench opens a menu which can be used to alter features in the visualization, like the colors or the size of certain components.
The camera icon is used for saving a PNG image of the currently displayed visualization to the user's local device. Lastly, the magnifier resets the zoom of the visualization, reverting it to its default zoom state.

Finally, at (6), there are some global features. On the left, there is a button to export the current visualization settings. This provides the user with a URL that can be entered at any time in the future to obtain exactly the same visualization. To the right of that is a checkbox to toggle the linking of zooming and panning between the visualizations. This only works when the view mode is set to "split", which is part of the next feature. The rightmost feature enables the user to switch between "single" and "split" display, which determines whether one or two visualizations are shown on the page.

4.3 Visualizations

We implemented five different visualization techniques, each providing a different perspective on the data.

Fig. 3. A heatmap generated from the Berlin metro map data used in [23]. The frequency of looking is represented by colors, where by default neon green means a high frequency; the more blue it gets, the lower the frequency. No coloring at all indicates that an area has not been looked at, or not long enough to be represented in the heatmap. When hovering over the heatmap, the corresponding density threshold is shown.

4.3.1 Heatmap

This point-based visualization (see Fig. 3 for an example) provides an overview of how much the different areas of the stimulus have been looked at. Our reason for implementing a heatmap [4] is that it gives the researcher an easy overview of the areas which were looked at the most or the longest. Technically speaking, we see a computed density plot of where the users have looked. The heatmap is made using D3.js with a contour density function. The color is computed by taking into account the duration and the position of each fixation. A contour density function is applied, and the corresponding x,y coordinates are colored according to their threshold.

For the user's convenience, certain options are implemented. The most significant one is that users can choose which bandwidth to apply to the contour density function. A higher bandwidth corresponds to bigger colored areas on the stimulus, while a lower bandwidth makes these areas more compact. The default bandwidth of the Gaussian kernel is 20. A tooltip provides the user with a threshold on the density. A slider for the opacity of the density areas has been implemented as well; the default setting of 40 corresponds to an alpha value of 0.4. A lower opacity makes it easier to see the underlying stimulus and distinguish contour shapes. In addition, users can change the primary and secondary heatmap colors. This allows researchers to choose colors that comply with a specific stimulus or give a certain impression.
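The heatmap layer can be sketched roughly as follows with D3's contour density API. This is a simplified illustration (the accessors, color choices, and default values are assumptions), not the exact Argus code.

```js
import * as d3 from 'd3';

// Draw density contours for one stimulus, weighting each fixation by its duration.
function drawHeatmap(svg, fixations, width, height, bandwidth = 20) {
  const contours = d3.contourDensity()
      .x(d => d.x)
      .y(d => d.y)
      .weight(d => d.duration)   // longer fixations contribute more to the density
      .size([width, height])
      .bandwidth(bandwidth)      // user-adjustable kernel bandwidth
      (fixations);

  // Map the density thresholds to a two-color scale (primary/secondary colors are configurable).
  const color = d3.scaleLinear()
      .domain(d3.extent(contours, c => c.value))
      .range(['steelblue', 'limegreen']);

  svg.append('g').attr('opacity', 0.4)     // default opacity setting
    .selectAll('path')
    .data(contours)
    .join('path')
      .attr('d', d3.geoPath())
      .attr('fill', c => color(c.value));
}
```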
Fig. 4. A scanpath visualization of the Tokyo metro map for one specific user (p26) from [23]. This visualization allows us to carefully follow this user's eye movements. A larger radius corresponds to a longer fixation duration. The numbers in the circles represent the order. Fixations 30 and 31 are selected.

4.3.2 Scanpath Visualization

The scanpath visualization is a point-based visualization that can be used to follow the 'path' a user takes while 'scanning' a stimulus. A scanpath is defined as an alternating sequence of fixations (as defined in Sect. 3.1) and saccades. In turn, saccades are swift eye movements between two fixations [3]. While saccades are not explicitly included in the dataset, they can be plotted by connecting successive fixations. This means, however, that we cannot determine the saccade duration.

This visualization is implemented using D3.js and vector graphics (SVG). The current stimulus is used as the background. Then, for each user, the scanpath is rendered in a unique color, to easily distinguish between different scanpaths. Fixations are drawn as circles; saccades are drawn as lines between them. In each circle a number is drawn, which corresponds to the order of the fixation, starting at 1. A longer fixation duration corresponds to a larger fixation radius, at a decreasing rate (i.e. a fixation with twice the duration gets a radius less than twice as large). Saccades, on the other hand, have a fixed thickness, since the saccade duration is unknown. Each scanpath is contained in a separate group element, which allows us to easily hide or show a subset of users.

Multiple scanpaths can be shown at the same time. When hovering over one, it changes color (transparency is removed completely) and is raised to the foreground, so it can be clearly inspected. Additionally, a tooltip for the current fixation is shown, which includes its user, timestamp, coordinates, and duration. If this fixation is clicked once, it is selected, staying highlighted even when the mouse moves away. If it is double-clicked, the data is filtered on the user of that fixation, meaning all scanpaths except the highlighted one are hidden. If a second visualization is shown simultaneously, its data is also filtered.

A common problem with scanpath visualizations is that they introduce visual clutter [3]. It is therefore possible (as described above) to select individual users. Moreover, the opacity of the scanpath can be changed. When (partly) transparent, not only the stimulus but also other fixations and saccades can be seen below the scanpath. Areas where many fixations are stacked on top of each other can even be identified, since they will be darker than other areas. To further decrease clutter, fixation circles can be turned off altogether, so that only the saccades remain visible. The plot can be further customized by choosing all of its colors.
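A stripped-down version of this rendering logic could look as follows. It is a sketch under the assumptions stated here (a D3 selection as input, one fixed color per user, arbitrary radius range), not the actual implementation.

```js
import * as d3 from 'd3';

// Render one user's scanpath on top of the stimulus background.
function drawScanpath(svg, fixations, color) {
  // Sub-linear radius: doubling the duration yields less than double the radius.
  const radius = d3.scaleSqrt()
      .domain([0, d3.max(fixations, d => d.duration)])
      .range([0, 25]);

  const g = svg.append('g').attr('class', 'scanpath'); // one group per user, easy to hide/show

  // Saccades: fixed-width lines connecting successive fixations.
  g.selectAll('line')
    .data(d3.pairs(fixations))
    .join('line')
      .attr('x1', ([a]) => a.x).attr('y1', ([a]) => a.y)
      .attr('x2', ([, b]) => b.x).attr('y2', ([, b]) => b.y)
      .attr('stroke', color).attr('stroke-width', 2);

  // Fixations: circles whose radius encodes duration, numbered in order of occurrence.
  g.selectAll('circle')
    .data(fixations)
    .join('circle')
      .attr('cx', d => d.x).attr('cy', d => d.y)
      .attr('r', d => radius(d.duration))
      .attr('fill', color).attr('fill-opacity', 0.6);

  g.selectAll('text')
    .data(fixations)
    .join('text')
      .attr('x', d => d.x).attr('y', d => d.y)
      .attr('text-anchor', 'middle').attr('dominant-baseline', 'middle')
      .text((d, i) => i + 1);
}
```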
Fig. 5. A transition diagram generated from the Bologna metro map data used in [23]. The AOIs are represented by circles; the percentage of fixations in a cluster, relative to the total number of fixations, is written inside each circle. Arrows represent the transitions between the AOIs.

4.3.3 AOI Transition Diagram

Transition diagrams are based on areas of interest. In this project, a clustering algorithm was used to create AOIs. We chose k-means clustering since it is one of the fastest clustering algorithms. The k-means algorithm is implemented in keeping with the definition given by Iváncsy et al. [20]. At the start of the algorithm, a number of random points is chosen, determined by the number of clusters the researcher wants. These points are the starting points of the clusters; their coordinates lie in the range between the minimum and maximum x and y coordinates of the fixation points. Thereafter, each fixation is assigned to the cluster whose center is closest. Once every fixation is assigned, a new center is calculated for each cluster by taking the mean of the x and y coordinates of the fixations in that cluster. This process is repeated until the clusters no longer change or 20 iterations have been made. The iteration limit ensures that no endless loops take place; in our testing, the clusters almost always stopped changing before 20 iterations.

The transition diagram shows the size and position of the areas of interest and the transitions between them, as can be seen in Fig. 5. We chose to make an in-context transition diagram, which means the clusters are shown on the stimulus; this makes it easy for researchers to see where the clusters lie on the stimulus. The visualization was made using an SVG and the D3.js library in JavaScript. A cluster is represented by a circle whose size depends on the percentage of total fixations in the cluster. A transition is represented by an arrow, whose thickness depends on the percentage of outgoing transitions it carries (transitions that stay within an AOI are not taken into account). For each pair of AOIs there are two arrows, one for each direction of the transition.

A downside of using the k-means clustering algorithm is its use of random starting points. A starting point could be chosen such that no fixation is closest to it, which would create an empty cluster. To remedy this, a new random point is selected to replace the empty cluster. Another downside is that different clusterings can result from the same data. This is caused by the random point selection used in k-means clustering, so this problem cannot be solved without changing the clustering algorithm. Transition diagrams do not show which fixations are assigned to which cluster; therefore, an option is added to show all fixations colored in the color of their cluster. The standard clustering does not take the fixation duration into account, but an option is added to do so.
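The clustering step described above can be sketched as follows. This is a simplified illustration of plain k-means on fixation coordinates (the duration-weighted variant and the empty-cluster re-seeding are only hinted at in comments), not the exact Argus code.

```js
// Simple k-means over fixation coordinates (sketch).
function kMeans(fixations, k, maxIterations = 20) {
  const minX = Math.min(...fixations.map(f => f.x)), maxX = Math.max(...fixations.map(f => f.x));
  const minY = Math.min(...fixations.map(f => f.y)), maxY = Math.max(...fixations.map(f => f.y));
  // Random starting centers within the bounding box of the fixations.
  let centers = Array.from({ length: k }, () => ({
    x: minX + Math.random() * (maxX - minX),
    y: minY + Math.random() * (maxY - minY),
  }));

  let assignment = new Array(fixations.length).fill(-1);
  for (let iter = 0; iter < maxIterations; iter++) {
    // Assign each fixation to the nearest center.
    const next = fixations.map(f => {
      let best = 0, bestDist = Infinity;
      centers.forEach((c, i) => {
        const d = (f.x - c.x) ** 2 + (f.y - c.y) ** 2;
        if (d < bestDist) { bestDist = d; best = i; }
      });
      return best;
    });
    if (next.every((c, i) => c === assignment[i])) break; // converged before the iteration limit
    assignment = next;
    // Recompute each center as the mean of its assigned fixations.
    centers = centers.map((c, i) => {
      const members = fixations.filter((_, j) => assignment[j] === i);
      if (members.length === 0) return c;                 // empty cluster: re-seed randomly in practice
      return {
        x: members.reduce((s, m) => s + m.x, 0) / members.length,
        y: members.reduce((s, m) => s + m.y, 0) / members.length,
      };
    });
  }
  return { centers, assignment };
}
```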
Fig. 6. An example eye cloud of the Tokyo subway map looked at by a large number of people [23].

4.3.4 Eye Clouds

The eye clouds visualization is based on the attention clouds tool developed by Burch et al. [12]. It displays snapshots of the fixation points in the dataset as circles whose radii are relative to the fixation duration. The centers of the eyes (small circles) are the (x,y)-coordinates of the fixation points provided by the dataset. The radius of each individual eye is determined by a mapping from the fixation duration values to a range that makes the circles reasonably large to investigate. In our case, this is realized using the built-in scaleSqrt function provided by D3.js.

The eyes are small SVG figures. They are held together through the force simulation system of D3.js. This system has been programmed such that all eyes are attracted towards the center of the container. So, even when a circle is dragged away from the eye cloud (dragging was implemented to allow small customizations of the overall composition to the user's liking), the eye will always try to return to the center. However, the eyes are programmed so that they never overlap with each other: the center is occupied by only one (arbitrary) eye, while all the others align around it. For clarity's sake, each eye also has a small force field around itself, so that the eyes keep a small amount of space between one another. When the user hovers over a single circle, the circle receives a border in a distinct color, which the user can alter but which is red by default. Moreover, a circle can be clicked, which toggles a (by default green) border as a selection indicator.

Eye clouds are great for getting a general overview of which areas of the map were looked at most. The downside of the eye clouds visualization, however, is that it does not display where on the whole stimulus the snapshot in an eye is located. This makes it hard to analyze the results without a second visualization next to it and without using the selection interaction technique to highlight the corresponding areas in the snapshots.
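The layout of the eyes can be sketched with D3's force module roughly as follows. Plain circles stand in for the image thumbnails here, and the radii, padding, and force configuration are illustrative assumptions rather than the values used in Argus.

```js
import * as d3 from 'd3';

// Lay out eye thumbnails as a packed cloud around the container center.
function layoutEyeCloud(svg, fixations, width, height) {
  const radius = d3.scaleSqrt()
      .domain([0, d3.max(fixations, d => d.duration)])
      .range([5, 40]);                                   // keep every eye reasonably large

  const nodes = fixations.map(d => ({ ...d, r: radius(d.duration) }));

  const eyes = svg.selectAll('circle')
    .data(nodes)
    .join('circle')
      .attr('r', d => d.r);

  d3.forceSimulation(nodes)
      .force('x', d3.forceX(width / 2))                  // pull every eye towards the center...
      .force('y', d3.forceY(height / 2))
      .force('collide', d3.forceCollide(d => d.r + 2))   // ...but never let them overlap (2px gap)
      .on('tick', () => {
        eyes.attr('cx', d => d.x).attr('cy', d => d.y);
      });
}
```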
Fig. 7. An example of the gaze stripes applied to a map of Tokyo's metro system [23]. Selected fixations are represented by a black box around the thumbnail.

4.3.5 Gaze Stripes

The gaze stripes visualization (as seen in Fig. 7) is a point-based visualization which shows the selected stimulus in combination with a timeline. In the gaze stripes, the time aspect is shown on the x-axis and the individual users on the y-axis. For a certain time interval, the area around the fixation point which has been looked at the most in this interval is copied and turned into a thumbnail. This visualization gives insight into how fixations are related to their point in time.

The gaze stripes visualization is implemented using D3.js and vector graphics (SVG). For every stimulus, we find the maximal time tmax over all users. We then define the time interval for this stimulus as the maximal time divided by the number of images we would like, so ∆t := tmax / n. Next, we determine for each time interval [ti−1, ti] (where i = 1,...,n and ti = i·∆t) which fixation point has been looked at the most. A rectangle of a user-specified size is cropped around that point and placed in the lane of the specific user. For a better overview, the gaze stripes can be shown together with, e.g., the scanpath visualization; when a thumbnail is selected, the related fixation in the scanpath visualization is indicated by a black circle.

A drawback is that the thumbnails must be very small in order to see the whole time overview. Zooming might remedy the small pictures, but then the overview is lost. Also, changing the radius r of the cropping can help to show the pictures in more or less detail. One must keep in mind that we only show the fixation which has been looked at the longest in a certain time interval; therefore, some fixation points may be neglected. The option of choosing the number of pictures can resolve this to some extent. A tooltip shows how long a participant stared at a fixation point. What is more, our gaze stripes do not show when the participant started and stopped looking, as the participant could have started looking in the interval before, or continued after, the interval in which the gaze is shown. Unfortunately, this problem cannot be remedied with our current implementation.
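The per-interval thumbnail selection can be sketched as follows. It is an illustration of the ∆t binning described above (the actual cropping into SVG thumbnails is omitted), not the exact Argus code.

```js
// For one user and one stimulus, pick the most-looked-at fixation per time interval.
function gazeStripeFixations(fixations, tMax, n) {
  const dt = tMax / n;                                        // ∆t := tmax / n
  const bins = Array.from({ length: n }, () => null);
  for (const f of fixations) {
    const i = Math.min(Math.floor(f.timestamp / dt), n - 1);  // index of the interval containing f
    if (bins[i] === null || f.duration > bins[i].duration) {
      bins[i] = f;                                            // keep the fixation looked at the longest
    }
  }
  // Each non-empty bin would then be cropped as a thumbnail around (f.x, f.y).
  return bins;
}
```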
4.4 Interaction Techniques

This section gives a brief overview of, and an implementation example for, each of the seven interaction techniques presented by Yi et al. [28]. A minimal sketch of how such techniques can be linked across views follows after the list.

• The select interaction technique is used to highlight certain data points that are of interest to a researcher. This can be useful to keep track of data when changing settings or switching to a different visualization. Every representation of a fixation can be clicked, after which it gets a border in all visualizations. This can be seen in Fig. 7.

• The explore interaction technique is used to view different parts of a visualization and see new parts of the data without changing the selected data. We have implemented panning, such that researchers can drag the visualization to see different parts of it.

• The reconfigure interaction technique makes it possible to rearrange different aspects of a visualization. This allows researchers to find a new perspective on their data by altering how it is presented to them. A good example is the eye clouds visualization, in which it is possible to rearrange the individual eyes to find different patterns in the data.

• The encode interaction technique allows users to customize the visualizations by changing how they are displayed. This includes, but is not limited to, the colors and sizes of elements in a visualization. For example, in the transition diagram it is possible to change the colors of different elements; by changing the colors of one type of element, these elements can be distinguished.

• The abstract/elaborate interaction technique makes it possible to change the level of detail presented. This is needed to change from an overview of all the data to a specific point in the data. One way to abstract or elaborate is to zoom in or out, which is implemented in all our visualizations.

• The filter interaction technique allows researchers to select different parts of the data to visualize. Researchers can filter out unwanted data; this data is not removed but is simply not used to generate the visualization. It is possible to filter on users and on the timestamps of fixations.

• The connect interaction technique can be split into two parts: related data and hidden data. The related data part allows researchers to see connections between data points. An example can be seen in the scanpath visualization, where hovering over a fixation also highlights all fixations of the same user. The hidden data part gives researchers the possibility to see data that is not normally shown in the visualization but might be of use. For example, in the transition diagram it is possible to see all fixations colored in the color of the AOI they are part of.
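As an illustration of how the select technique can be propagated between two linked views, a D3 dispatcher can be used roughly as follows. The event and class names are assumptions; this is a sketch of one possible wiring, not the actual Argus implementation.

```js
import * as d3 from 'd3';

// A shared dispatcher that broadcasts the currently selected fixation to all views.
const selection = d3.dispatch('select');

// Each visualization registers a listener under its own namespace.
selection.on('select.scanpath', fixation => {
  d3.selectAll('.scanpath circle')
    .classed('selected', d => d === fixation);   // e.g. styled with a green border in CSS
});
selection.on('select.eyecloud', fixation => {
  d3.selectAll('.eyecloud circle')
    .classed('selected', d => d === fixation);
});

// Clicking a fixation in any view notifies every linked view.
function onFixationClick(event, fixation) {
  selection.call('select', null, fixation);
}
```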
5 APPLICATION EXAMPLE: TOKYO

In this section, eye tracking data is used and analyzed in order to give an example of how useful insights can be gathered using Argus. The metro map we look at is Tokyo's (09 Tokyo S1). On this map, the starting place is indicated by a green cursor and the destination is marked by a large red-white target. Keep in mind, however, that this analysis is done using a specific set of eye tracking data provided to us by other researchers.

Fig. 9. From left to right, the scanpath, the heatmap, and the AOI transition diagram of Tokyo's subway map.

Looking at the scanpath visualization for all users combined (Fig. 9), it immediately jumps out that there is a very dense bundle of paths on the straight line from start to finish: a lot of people have scanned the space between their points of interest and found their route. However, there are also a great number of paths going up and left from the start, and even some to the right, under the aforementioned dense line.

Continuing with the AOI transition diagram (Fig. 9), we observe yet again that the two biggest clusters lie at the start and end of the track. Here, however, the cluster underneath the destination is indicated as the next largest cluster. This differs from what we observed in the scanpath, where it looked like the line between start and end was more densely looked at, followed by the area to the left of the destination. More research is needed for a final conclusion.

Next, we look at the heatmap visualization (Fig. 9). It appears to be a consistent result that the start and destination get the most attention, because, again, both appear with high intensity. In this visualization, the area to the left appears to be more intensely fixated on than the space in between and to the right of the start and destination. This is consistent with the results of the scanpath visualization. In case the conflicting results with the AOI transition diagram leave room for doubt, however, more visualizations can be used for further analysis.

Moving on to the gaze stripes (Fig. 7) and eye clouds (Fig. 6), we conclude one final time that the most attention is given to the start and the target. Apart from that, there appear to be many snapshots in the gaze stripes, and a rather high number of large eyes, in which the Shinjuku-sanchome line is displayed. This is again the area to the left of the target and above the start.

6 DISCUSSION AND LIMITATIONS

In this section, we explain and reflect on some of the choices we made while designing our tool, as well as outline its limitations and list some improvements for future work.

6.1 Filtering

While we do provide some filtering options, our tool lacks others. For instance, although our tool supports showing either one user or all users simultaneously, it is not possible to filter users individually (i.e. allowing any combination). Of course, it is possible to open two instances of our tool, showing one user each, but this is not a real fix. A much more flexible solution would be a list of users that could be checked or unchecked individually.

Similarly, while it is possible to filter the data based on a timestamp interval, our tool cannot filter on fixation duration. Such an option would have been interesting, since it would have made it possible to filter out insignificant fixations, reducing clutter in the scanpath and eye clouds. In addition, by looking at the fixations with the longest duration, areas of interest could be identified more easily.

6.2 Clustering

We implemented only one way of clustering the data: a k-means clustering algorithm (see Sect. 4.3.3). Often, AOIs are based on semantic information [3]; in our case, however, this is not possible. While the number of clusters can be specified by the user, we do not provide many other options regarding clustering.
Since the result of the algorithm depends on the initial positions of the clusters (which are random) [20], running the algorithm again on the same dataset is not guaranteed to produce identical results. Moreover, k-means clustering only works well for spherical data and is not well suited to noise [20], and our data may be neither spherical nor free of noise. The flexibility of our tool could be improved by allowing users to specify AOIs themselves, or by letting them choose between different clustering algorithms.

6.3 User Interface

For our user interface, we chose not to use any existing frameworks like Bootstrap or React. We opted instead to implement everything ourselves from scratch, using plain HTML and CSS. This has the benefit of no overhead, meaning improved performance. On the other hand, development time is increased and additional bugs could be introduced, since we have to implement every UI feature ourselves. In the end, though, we are happy with our choice, since it offered us a lot of flexibility and made our tool quick and therefore easy to use.

We do worry, however, that our design will not always be intuitive for users. For example, for technical reasons it is not possible to open the same type of visualization twice; for users, this may not be clear. Furthermore, there are many options, some of which are hidden behind menus. While they are all accompanied by a label or tooltip, their function may still not be immediately obvious. We think that a built-in tutorial, one that guides users step by step through the options, would have improved the usability of our tool.

6.4 Database Performance

It takes only a few hundred milliseconds, if not less, for the visualizations to be shown after selecting the data (dataset, stimulus, user). As a matter of fact, the most time-consuming process of our tool is data parsing, which is why we have analyzed its performance. For this we used the metro maps dataset from Netzel et al. [23] and our production server (see Sect. 4.1.1). Note that these tests do not include the unzipping and copying of images. We ran each test twice and took the average to reduce the effect of fluctuations.

On average, the three operations combined take a very respectable three seconds, excluding HTTP requests (see Fig. 8a). While performance could have been improved using a NoSQL database (at the cost of less structure), our current solution is certainly faster than storing plain CSV files. This is especially true since our tables are indexed, meaning retrieval is sped up at the cost of load time. This trade-off can clearly be seen in Fig. 8a. The same figure shows that the performance of the database does not depend on how 'full' it is.

Fig. 8b shows that database operations take longer the larger the uploaded dataset is. However, the time it takes to perform tasks fluctuates heavily. Indeed, we have had instances where 'load' took a staggering 10 seconds. We think there are several reasons for this. Firstly, our website is hosted on a shared system, which means that temporary network surges for other servers on the same system could impact the performance of our website and database. Secondly, the fact that our database is a (local) server itself can also contribute to the fluctuations. This is a trade-off between network overhead and ease of development.

Fig. 8. (a) The time it takes to perform database tasks (parse, load, get) depending on how many datasets have already been uploaded, each containing roughly 120k records. Loading data into the database takes the most time, especially if it is empty. Retrieving data from the database is trivial compared to the other tasks. In short, the size of the database (at levels we can expect) does not seem to impact the performance. (b) The time it takes to perform database tasks depending on how large the uploaded dataset is. The figure is scaled logarithmically to better spread out the measurements. There seems to be a linear relation between the number of records and the time it takes to perform database operations.

6.5 Security and Data Validation

Since we are using an SQL database, our tool is vulnerable to SQL injection attacks.
6.5 Security and Data Validation

Since we are using an SQL database, our tool is vulnerable to SQL injection attacks. While we did try to prevent these types of attacks by sanitizing the input, we did not focus on (database) security for this project (e.g. we did not consider second-order injections), since our tool does not handle sensitive user data. Cross-site scripting (XSS) is another potential vulnerability that we did not look into.

Another area in which our tool could be improved is input data validation. Currently, as described in Sect. 3.3, there is only very limited checking with regard to the data format. For example, both the order of the columns and the column data types (e.g. numerical or text-based) are not checked but simply assumed to be correct. Also, no sanity checks (e.g. whether coordinates or timestamps are realistic) or stimulus checks (i.e. whether the specified stimuli actually exist) are performed. Even though our upload tool will still reject most (seriously) malformed files, it does not provide clear feedback to the user. Moreover, when an uploaded dataset is rejected due to its size, the user is also not clearly informed. In this regard, the user experience can be improved.
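To make these points concrete, the sketch below shows two measures of the kind discussed here: a parameterized query, which prevents user-supplied values from altering the SQL structure, and a basic sanity check for an uploaded row. It is again illustrative only, assuming the node-postgres ('pg') package and hypothetical table and column names rather than the actual Argus code.

```js
// Illustrative sketch: parameterized queries bind values separately
// from the SQL text, so they cannot inject additional SQL.
const { Pool } = require('pg');
const pool = new Pool();

async function getFixations(stimulusName, userId) {
  // $1 and $2 are passed as bound values, never spliced into the query string.
  const result = await pool.query(
    'SELECT x, y, timestamp, duration FROM fixations ' +
    'WHERE stimulus_name = $1 AND user_id = $2',
    [stimulusName, userId]
  );
  return result.rows;
}

// A basic sanity check for one uploaded row: verify that the fields are
// numeric and that the coordinates fall inside the stimulus dimensions.
function isValidRow(row, stimulusWidth, stimulusHeight) {
  const x = Number(row.x);
  const y = Number(row.y);
  const t = Number(row.timestamp);
  return Number.isFinite(x) && Number.isFinite(y) && Number.isFinite(t)
    && t >= 0
    && x >= 0 && x <= stimulusWidth
    && y >= 0 && y <= stimulusHeight;
}
```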
7 CONCLUSION

In this paper, Argus, a fast, user-friendly visualization tool for eye tracking data, was presented. We discussed the data model of the dataset and how the data was parsed by, among other things, removing diacritics and normalizing timestamps. Furthermore, this report looked at how the back-end and front-end work together to intuitively and efficiently guide researchers through the process of generating visualizations for their data. Additionally, we closely analyzed the five implemented visualizations, namely the heatmap, the scanpath visualization, the gaze stripes, the eye clouds and the AOI transition diagram, by looking at the goal of each visualization, its implementation and its drawbacks. We also reviewed how the seven interaction techniques were successfully integrated into this tool. On top of that, we showed our tool in action with an application example that demonstrates the abilities of Argus. Finally, we considered the limitations of Argus and what can be done to improve the tool. Future work should consider, among other things, additional ways of filtering users and alternative clustering algorithms.

ACKNOWLEDGMENTS

The authors wish to thank a handful of people who contributed to the success of this project either directly or indirectly. First and foremost, we would like to thank Catalin Ionescu for his excellent tutoring. With his guidance, we were able to learn how to cooperate in a new and challenging environment.

Secondly, we wish to express our appreciation to the staff of this course who made our project possible, namely Elisabeth Melby and Michael Burch. The feedback given by Elisabeth Melby on the cooperation through Scrum and the feedback given by Michael Burch on the interim paper gave the authors a chance to grow and improve.

REFERENCES

[1] F. Beck, M. Burch, and S. Diehl. Matching application requirements with dynamic graph visualization profiles. In Proceedings of 17th International Conference on Information Visualisation, IV, pp. 11–18. IEEE Computer Society, 2013.
[2] F. Beck, M. Burch, T. Munz, L. D. Silvestro, and D. Weiskopf. Generalized Pythagoras trees: A fractal approach to hierarchy visualization. In Proceedings of International Conference on Computer Vision, Imaging and Computer Graphics - Theory and Applications - International Joint Conference, VISIGRAPP, vol. 550 of Communications in Computer and Information Science, pp. 115–135. Springer, 2014.
[3] T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf, and T. Ertl. Visualization of eye tracking data: A taxonomy and survey. Comput. Graph. Forum, 36(8):260–284, 2017. doi: 10.1111/cgf.13079
[4] A. Bojko. Informative or misleading? Heatmaps deconstructed. In J. A. Jacko, ed., Human-Computer Interaction. New Trends, 13th International Conference, HCI International 2009, San Diego, CA, USA, July 19-24, 2009, Proceedings, Part I, vol. 5610 of Lecture Notes in Computer Science, pp. 30–39. Springer, 2009. doi: 10.1007/978-3-642-02574-7_4
[5] M. Bostock, V. Ogievetsky, and J. Heer. D3: Data-driven documents. IEEE Trans. Vis. Comput. Graph., 17(12):2301–2309, 2011. doi: 10.1109/TVCG.2011.185
[6] M. Burch. Time-preserving visual attention maps. In Intelligent Decision Technologies 2016, Smart Innovation, Systems and Technologies, pp. 273–283, 2016. doi: 10.1007/978-3-319-39627-9_24
[7] M. Burch, M. Hlawatsch, and D. Weiskopf. Visualizing a sequence of a thousand graphs (or even more). Computer Graphics Forum, 36(3):261–271, 2017.
[8] M. Burch, M. Höferlin, and D. Weiskopf. Layered TimeRadarTrees. In Proceedings of 15th International Conference on Information Visualisation, IV, pp. 18–25. IEEE Computer Society, 2011.
[9] M. Burch, S. Lohmann, F. Beck, N. Rodriguez, L. D. Silvestro, and D. Weiskopf. Radcloud: Visualizing multiple texts with merged word clouds. In Proceedings of 18th International Conference on Information Visualisation, IV, pp. 108–113. IEEE Computer Society, 2014.
[10] M. Burch, C. Müller, G. Reina, H. Schmauder, M. Greis, and D. Weiskopf. Visualizing dynamic call graphs. In Proceedings of the Vision, Modeling, and Visualization Workshop 2012, pp. 207–214. Eurographics Association, 2012.
[11] M. Burch and N. Timmermans. Sankeye: A visualization technique for AOI transitions. In A. Bulling, A. Huckauf, E. Jain, R. Radach, and D. Weiskopf, eds., ETRA '20: 2020 Symposium on Eye Tracking Research and Applications, Short Papers, Stuttgart, Germany, June 2-5, 2020, pp. 48:1–48:5. ACM, 2020. doi: 10.1145/3379156.3391833
[12] M. Burch, A. Veneri, and B. Sun. Eyeclouds: A visualization and analysis tool for exploring eye movement data. In Proceedings of the 12th International Symposium on Visual Information Communication and Interaction, VINCI 2019, Shanghai, China, September 20-22, 2019, pp. 8:1–8:8. ACM, 2019. doi: 10.1145/3356422.3356423
[13] M. Burch and D. Weiskopf. A flip-book of edge-splatted small multiples for visualizing dynamic graphs. In Proceedings of the 7th International Symposium on Visual Information Communication and Interaction, VINCI, p. 29. ACM, 2014.
[14] M. Burch and D. Weiskopf. On the benefits and drawbacks of radial diagrams. In W. Huang, ed., Handbook of Human Centric Visualization, pp. 429–451. Springer, 2014.
[15] N. Charness, E. M. Reingold, M. Pomplun, and D. M. Stampe. The perceptual aspect of skilled performance in chess: Evidence from eye movements. Memory & Cognition, 29(8):1146–1152, 2001. doi: 10.3758/bf03206384
[16] T. Fawcett. The eyes have it: Eye tracking data visualizations of viewing patterns of statistical graphics. All Graduate Plan B and other Reports, 787:1–6, May 2016.
[17] R. S. Hessels, C. Kemner, C. Boomen, and I. T. Hooge. The area-of-interest problem in eyetracking research: A noise-robust solution for face and sparse stimuli. Behavior Research Methods, 48(4), Dec 2016. doi: 10.3758/s13428-015-0676-y
[18] K. Holmqvist. Eye tracking: A comprehensive guide to methods and measures. Oxford University Press, 2011.
[19] K. Holmqvist, J. Holsanova, M. Barthelson, and D. Lundqvist. Reading or scanning? A study of newspaper and net paper reading, pp. 657–670. Elsevier, United States, 2003. In cooperation with Humanistlaboratoriet, Lund University.
[20] R. Iváncsy, A. Babos, and C. Legány. Analysis and extensions of popular clustering algorithms. 2005.
[21] K. Kurzhals, M. Hlawatsch, F. Heimerl, M. Burch, T. Ertl, and D. Weiskopf. Gaze stripes: Image-based visualization of eye tracking data. IEEE Trans. Vis. Comput. Graph., 22(1):1005–1014, 2016. doi: 10.1109/TVCG.2015.2468091
[22] K. Kurzhals and D. Weiskopf. AOI transition trees. In H. R. Zhang and T. Tang, eds., Proceedings of the 41st Graphics Interface Conference, Halifax, NS, Canada, June 3-5, 2015, pp. 41–48. ACM, 2015.
[23] R. Netzel, B. Ohlhausen, K. Kurzhals, R. Woods, M. Burch, and D. Weiskopf. User performance and reading strategies for metro maps: An eye tracking study. Spatial Cognition & Computation, 17(1-2):39–64, 2017. doi: 10.1080/13875868.2016.1226839
[24] J. C. Roberts. State of the art: Coordinated multiple views in exploratory visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), 2007. doi: 10.1109/cmv.2007.20
[25] L. F. Scinto, R. Pillalamarri, and R. Karsh. Cognitive strategies for visual search. Acta Psychologica, 62(3):263–292, 1986. doi: 10.1016/0001-6918(86)90091-0
[26] O. Spakov and D. Miniotas. Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering, 2(74):55–58, 2007.
[27] C. Vehlow, M. Burch, H. Schmauder, and D. Weiskopf. Radial layered matrix visualization of dynamic graphs. In Proceedings of 17th International Conference on Information Visualisation, IV, pp. 51–58. IEEE Computer Society, 2013.
[28] J. S. Yi, Y. ah Kang, J. T. Stasko, and J. A. Jacko. Toward a deeper understanding of the role of interaction in information visualization. IEEE Trans. Vis. Comput. Graph., 13(6):1224–1231, 2007. doi: 10.1109/TVCG.2007.70515