Information Visualization Interfaces for
Multi-Device Synchronous Collocated Collaboration
Samuel Cox, Paul Craig
Dept. of Computer Science and Software Engineering
Xi’an Jiaotong-Liverpool University
Suzhou, China
samuel.cox@xjtlu.edu.cn
Abstract—In recent times, the use of multiple devices by
individuals has become prevalent, and the way we interact with
data has been changing. Ownership of smartphones, tablets,
smart clothing, and more traditional desktops and displays is
commonplace, and while interaction by mouse remains popular,
manipulation through touch, speech and motion is increasingly
used in its stead. Yet these multiple devices are largely used
separately from one another.
This paper will explore the benefits of using multiple devices
for synchronous collocated collaborative work. That is, multiple
people working at the same time, in the same place, with multiple
devices.
I. INTRODUCTION
As technology has advanced and costs have become less
prohibitive, there has been a greater availability of connected
and mobile devices. Whereas in the past sending an email or
shopping online required a desktop computer, this can now be
achieved on the go using a smartphone or tablet.
This increase in devices comes hand-in-hand with greater
quantities of stored data. Transactions, business data, and
medical records are just some of the items of data stored in
ever greater quantities.
However, looking through all this data in text format is
cumbersome, and trends are difficult to spot. For this reason,
visualizations have become more popular. Visualizations allow
users to understand their data more easily and are now
integral to many disciplines such as finance and scientific
research. Interactive visualizations on computers are known as
information visualizations, and can benefit data understanding
and analysis.
Information visualization can be used collaboratively to
enable a number of individuals to analyse data. Studies have
shown that visualizations used in collaborative settings
increase the efficiency of analysis and provide greater levels
of insight[1].
This paper will investigate the benefits of information
visualization interfaces on multiple devices in collocated
collaborative settings.
II. INFORMATION VISUALIZATION
When data is visualized interactively on a computer it
is known as “information visualization”. Card et al. define
information visualization as:
“the use of computer-supported, interactive, visual
representations of abstract data in order to amplify
cognition”[2]
This is not to be confused with scientific visualization,
where the data is not abstract and has an inherent spatial
representation.
A. Design Guidelines
Shneiderman devised guidelines for designing information
visualization interfaces known as the “visual information
seeking mantra”[3]. The mantra states that visualization
follows the pattern of:
“Overview first, zoom and filter, then details-on-
demand”
Shneiderman discussed the seven data types used in
visualizations (1-dimensional, 2-dimensional, 3-dimensional,
temporal, multi-dimensional, tree, and network), and how each
imposes unique challenges leading to different visualization
techniques.
The mantra can be broadened into seven key tasks that
should be performable in information visualization interfaces:
Overview: Provide an overview of the whole collection.
Users should be able to zoom and pan across the collection
and view details. Fisheye distortion can be used to magnify
an area while providing a distorted overview away from the
centre of magnification, and a focus-plus-context view
provides a magnified area of interest bounded by the rest of
the undistorted collection.
Zoom: Zoom in on items of interest. Zooming should be
smooth so users maintain their positional awareness
within the visualization.
Filter: Filter out uninteresting items. Dynamic queries can be
used to deselect unwanted items (see section IV for an early
implementation of this).
Details-on-demand: Get details of any items or groupings.
This should be simpler once items have been filtered from
the collection.
View Relationships: View relationships among items. For
example, view items with similar attributes.
History: Allow the user to undo and redo actions by keeping
a history of actions.
Extract: Allow users to extract query results in well-known
formats, for use in email, printing, presenting, or
statistical analysis.
For multi-device collocated collaborative settings, these
seven tasks require extra consideration.
While an overview of the whole collection is suitable for a
large display such as a tabletop, it is impractical on smaller
devices. Here compromises need to be made by displaying
either smaller subsets of the collection, aggregated data, or
“details-on-demand”.
The portability of mobile devices can be exploited by
allowing users to extract data to their devices, so query results
can be taken away. Additionally, as mobile devices are more
suited to single users, information displayed can be tailored to
suit the user’s requirements.
However, larger displays acting as shared workspaces need
to account for actions such as panning, zooming and filtering.
For this reason, shared control and management of the display
space need to be considered. Interfaces should not allow one
user to disrupt the work of others; conversely, users should be
given appropriate means to manipulate visualizations.
One solution to this is to split the workspace into different
areas for each user, or to magnify areas of interaction akin to
focus-plus-context. Interfaces for large displays will also need
to account for the viewing angles of multiple users, so that text
and visualizations are orientated correctly for each user[4].
In addition, brushing and linking are useful tools[5].
Brushing occurs when a user highlights a subset of data,
causing extra detail to be displayed (e.g. in a pop-up window).
Linking then results in this highlighted subset of data also
being highlighted in other visualizations within the view. This
is an example of “multiple coordinated views”, where an item
highlighted in one view is also selected in another view[6].
In a multi-device setting, if an item is brushed on a large
display, its detail could be displayed on the user’s mobile
device.
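As an illustration, the linking step can be sketched as a coordinator that propagates a brushed selection to every other registered view. The class and method names below are hypothetical, not taken from any of the cited systems:

```python
# Minimal sketch of brushing and linking across coordinated views.
class View:
    def __init__(self, name):
        self.name = name
        self.highlighted = set()

    def highlight(self, ids):
        self.highlighted = set(ids)

class Coordinator:
    """Propagates a brushed selection to every registered view."""
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def brush(self, source, ids):
        # Link: the subset brushed in one view is highlighted in all others.
        for view in self.views:
            if view is not source:
                view.highlight(ids)

coord = Coordinator()
tabletop = View("tabletop")
phone = View("phone")
coord.register(tabletop)
coord.register(phone)
# Brushing on the large display highlights the same items on the mobile view.
coord.brush(tabletop, {"item-7", "item-9"})
```

In a multi-device setting the coordinator would run on a shared server, with each device's view registered over the network.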
Furthermore, alternative methods of interaction such as
touch, motion and speech have different design implications.
For example, speech control could be considered awkward or
socially unacceptable if multiple users wish to interact
concurrently[7]. The imprecision of motion control means it is
more suited to less precise interactions, yet it allows multiple
users to interact without interference. Conversely, while touch
has higher precision, users can obstruct each other if they are
working in close proximity on the display space.
B. A Reference Model for Information Visualization
Chi developed a reference model for information
visualization interfaces, from data to visual form[8] (see
Figure 1).
Fig. 1. A reference model for information visualization
Here raw data is transformed and given meaning before being
applied to various visualizations, and finally displayed in a
view for the user. Actions the user takes further manipulate
the model: filtering items of data causes a data
transformation; manipulating visualizations leads to a
visualization transformation; and changing the view position
or scale causes a view transformation.
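As a minimal sketch, the model's stages can be expressed as a pipeline of functions, one per transformation. The data, field names and thresholds below are illustrative, not drawn from Chi's paper:

```python
# Illustrative pipeline following the data-to-view reference model.
raw_data = [{"country": "A", "gdp": 1}, {"country": "B", "gdp": 3}]

def data_transform(raw, min_gdp=0):
    """Data transformation: filter raw data into an analytical abstraction."""
    return [row for row in raw if row["gdp"] >= min_gdp]

def visual_mapping(table):
    """Visualization transformation: map data to visual structures (bar heights)."""
    return [{"label": row["country"], "height": row["gdp"] * 10} for row in table]

def view_transform(bars, scale=1.0):
    """View transformation: scale the visual structures for display."""
    return [{"label": b["label"], "height": b["height"] * scale} for b in bars]

# A user action (e.g. moving a filter slider) re-runs the relevant stage.
view = view_transform(visual_mapping(data_transform(raw_data, min_gdp=2)), scale=0.5)
```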
III. VISUALIZATIONS FOR HIGHER DIMENSIONS OF DATA
The recent increase in stored data often means that
traditional diagrams (such as bar and pie charts) are
impractical, as they prove cluttered and difficult to read.
For this reason, alternative forms of data representation and
manipulation can be used.
As the quantity of data and variables increases, the data is
said to be of a higher dimension. To represent this, texture,
colour, and various forms of labelling can be used. Numbering
and labelling are more suitable for categorical data, while
colour scales and shading are more suitable for ordinal data.
For example, heat maps represent data both by position on the
x and y axes and by encoding an ordinal value as colour around
each point.
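A heat map's colour encoding of this kind can be sketched as a linear interpolation between two endpoint colours. The blue-to-red convention and RGB representation below are arbitrary choices for illustration:

```python
# Illustrative sketch: mapping an ordinal/quantitative value onto a
# linear colour scale, as a heat map does around each point.
def to_colour(value, vmin, vmax):
    """Linear interpolation from blue (low) to red (high), as an RGB triple."""
    t = (value - vmin) / (vmax - vmin)
    return (int(255 * t), 0, int(255 * (1 - t)))

to_colour(0, 0, 10)   # lowest value: pure blue (0, 0, 255)
to_colour(10, 0, 10)  # highest value: pure red (255, 0, 0)
```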
A. Visual Variables
Bertin characterised seven “visual variables” which can
be used to represent data in visualizations (later revised by
Carpendale[9]).
They are: position along an axis; size (area, volume, changes
in length); shape; value (light to dark); colour (changes in
hue); orientation (changes in alignment); and texture (changes
in “grain”). Motion was added in later revisions.
Mackinlay investigated the effectiveness and accuracy of
these different visual variables when visualizing quantitative,
ordinal and nominal data[10]. This can be used as a guideline
when designing visualizations for different data sets.
However, always favouring the more accurate visual variables
over the less accurate may not be the best approach. A
visualisation should still be able to inform the user of
patterns and trends, but if these can be expressed in a more
accurate way, then this is preferable[11].
IV. EARLY VISUALIZATIONS
Shneiderman developed a method called “dynamic queries”
for filtering and selecting items from large quantities of
data[12]. Here users could use sliders and buttons to produce
list views and visual representations, such as colour-coded
map views and scatter plots, to aid understanding of the
information. Testing showed these visualisations greatly
improved performance, and user engagement was positive.
Shneiderman and Ahlberg developed the “starfield display”,
a visual search interface based on a scatter diagram with
zooming and filtering features[13].
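The dynamic-query style of filtering behind these interfaces can be sketched as a predicate that is re-evaluated whenever a slider moves, with the view redrawn from the result. The housing data and slider names below are hypothetical:

```python
# Illustrative sketch of a dynamic query: slider bounds select the
# visible subset, and the visualization redraws from the result.
homes = [
    {"id": 1, "price": 120, "bedrooms": 2},
    {"id": 2, "price": 250, "bedrooms": 4},
    {"id": 3, "price": 180, "bedrooms": 3},
]

def dynamic_query(items, price_max, bedrooms_min):
    """Re-evaluated on every slider movement."""
    return [h for h in items
            if h["price"] <= price_max and h["bedrooms"] >= bedrooms_min]

# Moving the "max price" slider to 200 and "min bedrooms" to 3:
visible = dynamic_query(homes, price_max=200, bedrooms_min=3)
```

The immediacy of the redraw, rather than the filter itself, is what gave dynamic queries their reported performance benefit.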
Johnson and Shneiderman developed the space-filling
“tree-map” to represent hierarchical data while utilising all
available screen space[14]. Here larger rectangles represent
the top of the hierarchy before branching into smaller
subcategories. This type of display is popular on devices with
smaller displays, where screen space needs to be fully
utilized.
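The slice-and-dice layout behind such tree-maps can be sketched in a few lines, alternating the split direction at each level of the hierarchy. The tuple-based tree encoding below is an illustrative simplification:

```python
# Illustrative slice-and-dice tree-map layout: each level partitions its
# rectangle proportionally to node sizes, alternating split direction.
def treemap(node, x, y, w, h, depth=0, out=None):
    """node = (name, size, children). Returns (name, x, y, w, h) rectangles."""
    if out is None:
        out = []
    name, size, children = node
    out.append((name, x, y, w, h))
    if not children:
        return out
    total = sum(c[1] for c in children)
    offset = 0.0
    for child in children:
        frac = child[1] / total
        if depth % 2 == 0:  # split horizontally at even depths
            treemap(child, x + offset * w, y, w * frac, h, depth + 1, out)
        else:               # split vertically at odd depths
            treemap(child, x, y + offset * h, w, h * frac, depth + 1, out)
        offset += frac
    return out

tree = ("root", 100, [("a", 75, []), ("b", 25, [])])
rects = treemap(tree, 0, 0, 100, 100)
```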
V. MOBILE VISUALIZATION
As mentioned previously, smartphones allow greater
mobility, and data can be saved and accessed in varying
locations.
They offer broad functionality, with sight, sound and touch
all commonly supported. For example, tactile feedback can be
used to give users feedback on I/O events.
While mobile devices are capable of performing similar
tasks to desktop computers, their limited size makes
human-computer input and output more difficult[15]. When a
device is reduced in size, its methods of input and output are
simultaneously reduced, leading to crowded keyboards and
interfaces.
For this reason, visualizations should be adapted for smaller
displays by using overviews and aggregations of data[16].
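The kind of aggregation suggested here might be sketched as collapsing individual records into per-category summaries before they reach the small display. The record fields below are illustrative:

```python
# Illustrative sketch: aggregating a collection for a small display,
# showing per-category summaries instead of every individual item.
from collections import Counter

records = [
    {"category": "food", "amount": 12.5},
    {"category": "travel", "amount": 40.0},
    {"category": "food", "amount": 7.0},
]

def aggregate_for_small_display(items):
    """Collapse individual records into per-category totals and counts."""
    totals = Counter()
    counts = Counter()
    for item in items:
        totals[item["category"]] += item["amount"]
        counts[item["category"]] += 1
    return {cat: {"total": totals[cat], "count": counts[cat]} for cat in totals}

summary = aggregate_for_small_display(records)
```

A details-on-demand gesture could then expand any category back into its constituent records.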
VI. LARGE DISPLAY VISUALIZATION
Large displays can present visualizations more clearly, and
allow better in-person communication[17].
Tabletop displays can be advantageous in collaborative
work, as users can gather around the device and interact
via touch. Researchers have developed prototype interfaces to
explore the use of tabletop displays for visual analytics[18],
collaborative design[19] and pair programming[20].
SourceVis allows multiple users to analyse visualizations
synchronously[4]. Visualizations can be viewed full screen, in
separate windows and at any angle parallel to the edge of the
display (see Figure 2).
Fig. 2. SourceVis allows multiple users to view different visualizations
concurrently[4]
VII. ALTERNATIVE DISPLAY AND INTERACTION DEVICES
Virtual reality head-mounted displays such as the Oculus
Rift and PlayStation VR are becoming more commercially
successful. Motion-detection devices such as the Microsoft
Kinect can track multiple people in real time, and motion
controllers such as that of the Nintendo Wii can track the
movements of the controller.
Future visualizations should take advantage of this growing
number of input and output devices, which can be used to view
larger quantities of data and to make collaborative work more
intuitive.
For instance, CoffeeTable is a collaborative visualization
interface that uses Wii remotes and digital pens to assist
with software development processes[21].
CodeSpace uses large shared touch screens, mobile touch
devices, and Kinect sensors to share information during
developer meetings[7]. Mobile devices are used as pointers for
a large display, allowing intuitive and accurate control.
VIII. COLLABORATIVE WORK
Empirical studies have shown that synchronous collaborative
visualizations benefit both the sharing of knowledge and the
time efficiency of analysis.
Bresciani and Eppler conducted an empirical study in which
analysts worked in groups to complete tasks with the help
of sub-optimal, optimal or no visualisations. It was found
that when analysts worked collaboratively using visualisations,
their efficiency and accuracy increased compared to those
who worked without visualisations. It was also noted that there
was no significant difference between optimal and sub-optimal
visualisations, and that participants were unaware of the
advantages afforded by collaborative visualisation[1].
Mark et al. developed a model of collaborative information
visualisation as can be seen in Figure 3[22].
They conducted a study where forty dyads performed data
analysis tasks using information visualisation software. The
following scenarios were compared:
• focused questions vs. free data discovery
• remote vs. collocated collaboration
• high vs. low transparency interface design (InfoZoom vs.
Spotfire)
Fig. 3. Model of the stages of information visualisation[22]
In general, data gathering followed this model. Initially
participants would break the question into its constituent
parts. For example, the question “how does wage vary by age?”
uses the variables “wage” and “age”. These variables are
then mapped to the software, and an appropriate visualisation
is found and assessed.
The type of question posed created different visualisation
needs. When focused questions were given, the path followed
was deterministic, so the correct visualisation needed to be
clear and easy to find. Free data discovery, however, followed
an opportunistic process; in this case it was beneficial
to have a clear view of all the visualisations, so the analysts
could focus on whatever appeared most interesting.
When remote and collocated working were compared, control
of the visualisation was seen to be split differently. For
collocated working, one user generally had more control of the
interface while the other watched; with remote working, both
users had their own terminal, so explicit confirmation was
made before each activity.
Finally, users of more transparent software with clearer
interfaces were found to complete tasks in shorter amounts
of time compared to those using less transparent software.
Past collaborative visualisations have implemented separate
views for individual user interaction, as well as views for
collaborative work allowing input from multiple users[23].
This allows input and control to be treated fairly among users.
IX. FUTURE CONTRIBUTIONS
To expand the knowledge in this area, we aim to develop
a prototype information visualization interface to facilitate
multi-device collocated collaboration.
Focus groups will be held to explore attitudes towards
different input methods, and paper prototypes could be
developed and assessed before a fully working prototype is
implemented.
A large display such as a tabletop or monitor will be used
as a shared display showing an overview of the data. The
interface will need to be accessible, so the use of widely
available devices such as smartphones and tablets as
additional displays would be advantageous.
To enable intuitive and natural use of the interface,
interaction techniques will need to adhere to existing
expectations, or be clearly explained on the interface.
Additional user studies could be conducted to assess these
expectations.
X. CONCLUSION
The fast development and uptake of new technology has
led to new opportunities for visualization interfaces. What
was previously technologically infeasible can now be produced
using off-the-shelf products. Visualizations should exploit
this wealth of new technologies, which enable more user
interaction and engagement than ever before.
The benefits of collaborative work have been proven in
past studies, making multi-device collaboration a rewarding
area of investigation. The strong uptake of mobile devices
presents a key opportunity to give visualizations greater
levels of mobility. In a multi-device setting, mobile devices
can be used as individual views, while larger displays are
used more collaboratively. Interfaces should be designed to
take advantage of this increased mobility, which allows the
results of group analysis to be kept on the device.
Prototype interfaces need to be developed to further test
the possibilities and potential hurdles caused by multi-device
collocated collaboration. User studies need to be completed
throughout this process to aid in the design and development
of such prototypes.
REFERENCES
[1] S. Bresciani and M. J. Eppler, “The benefits of synchronous collaborative
information visualization: Evidence from an experimental evaluation,”
IEEE transactions on visualization and computer graphics, vol. 15,
no. 6, pp. 1073–1080, 2009.
[2] S. K. Card, J. D. Mackinlay, and B. Shneiderman, Readings in
Information Visualization: Using Vision to Think. San
Francisco, Calif.: Morgan Kaufmann, 1st ed., Feb. 1999.
[3] B. Shneiderman, “The Eyes Have It: A Task by Data Type Taxonomy
for Information Visualizations,” in Proceedings of the 1996 IEEE
Symposium on Visual Languages, VL ’96, (Washington, DC, USA),
pp. 336–343, IEEE Computer Society, 1996.
[4] C. Anslow, S. Marshall, J. Noble, and R. Biddle, “SourceVis: Collabo-
rative software visualization for co-located environments,” in 2013 First
IEEE Working Conference on Software Visualization (VISSOFT), pp. 1–
10, Sept. 2013.
[5] A. Buja, J. A. McDonald, J. Michalak, and W. Stuetzle, “Interactive
data visualization using focusing and linking,” in Proceedings
of the IEEE Conference on Visualization ’91, pp. 156–163,
Oct. 1991.
[6] M. Q. Wang Baldonado, A. Woodruff, and A. Kuchinsky, “Guidelines
for Using Multiple Views in Information Visualization,” in Proceedings
of the Working Conference on Advanced Visual Interfaces, AVI ’00,
(New York, NY, USA), pp. 110–119, ACM, 2000.
[7] A. Bragdon, R. DeLine, K. Hinckley, and M. R. Morris, “Code Space:
Touch + Air Gesture Hybrid Interactions for Supporting Developer
Meetings,” in Proceedings of the ACM International Conference on
Interactive Tabletops and Surfaces, ITS ’11, (New York, NY, USA),
pp. 212–221, ACM, 2011.
[8] E. H. Chi, “A taxonomy of visualization techniques using the data state
reference model,” in IEEE Symposium on Information Visualization,
2000. InfoVis 2000, pp. 69–75, 2000.
[9] M. S. T. Carpendale, “Considering Visual Variables as a Basis for
Information Visualisation,” Jan. 2003.
[10] J. Mackinlay, “Applying a Theory of Graphical Presentation to the
Graphic Design of User Interfaces,” in Proceedings of the 1st Annual
ACM SIGGRAPH Symposium on User Interface Software, UIST ’88,
(New York, NY, USA), pp. 179–189, ACM, 1988.
[11] W. S. Cleveland and R. McGill, “Graphical Perception: Theory, Experi-
mentation, and Application to the Development of Graphical Methods,”
Journal of the American Statistical Association, vol. 79, pp. 531–554,
Sept. 1984.
[12] B. Shneiderman, “Dynamic queries for visual information seeking,”
IEEE Software, vol. 11, pp. 70–77, Nov. 1994.
[13] C. Ahlberg and B. Shneiderman, “Visual Information Seeking: Tight
Coupling of Dynamic Query Filters with Starfield Displays,” in Pro-
ceedings of the SIGCHI Conference on Human Factors in Computing
Systems, CHI ’94, (New York, NY, USA), pp. 313–317, ACM, 1994.
[14] B. Johnson and B. Shneiderman, “Tree-maps: A space-filling
approach to the visualization of hierarchical information
structures,” in Proceedings of the IEEE Conference on
Visualization ’91, pp. 284–291, Oct. 1991.
[15] C. Harrison, “Appropriated Interaction Surfaces,” Computer, vol. 43,
no. 6, pp. 86–89, 2010.
[16] L. Chittaro, “Visualizing information on mobile devices,” Computer,
vol. 39, pp. 40–45, Mar. 2006.
[17] P. Isenberg, D. Fisher, S. A. Paul, M. R. Morris, K. Inkpen, and
M. Czerwinski, “Co-located collaborative visual analytics around a
tabletop display,” IEEE Transactions on visualization and Computer
Graphics, vol. 18, no. 5, pp. 689–702, 2012.
[18] P. Isenberg, D. Fisher, M. R. Morris, K. Inkpen, and M. Czerwinski, “An
Exploratory Study of Co-located Collaborative Visual Analytics around
a Tabletop Display,” Microsoft Research, Nov. 2010.
[19] S. D. Scott, M. S. T. Carpendale, and K. M. Inkpen, “Territoriality
in Collaborative Tabletop Workspaces,” in Proceedings of the 2004
ACM Conference on Computer Supported Cooperative Work, CSCW
’04, (New York, NY, USA), pp. 294–303, ACM, 2004.
[20] A. Soro, S. A. Iacolina, R. Scateni, and S. Uras, “Evaluation of
User Gestures in Multi-touch Interaction: A Case Study in Pair-
programming,” in Proceedings of the 13th International Conference on
Multimodal Interfaces, ICMI ’11, (New York, NY, USA), pp. 161–168,
ACM, 2011.
[21] J. Hardy, C. Bull, G. Kotonya, and J. Whittle, “Digitally annexing desk
space for software development: NIER track,” in 2011 33rd International
Conference on Software Engineering (ICSE), pp. 812–815, May 2011.
[22] G. Mark, K. Carpenter, and A. Kobsa, “A model of synchronous collab-
orative information visualization,” in Seventh International Conference
on Information Visualization, 2003. IV 2003. Proceedings, pp. 373–381,
July 2003.
[23] C. Anslow, “Collaborative Visualization: Definition, Challenges and
Research Agenda,” Jan. 2012.