Description of a multi-touch, multi-user application developed for Microsoft Surface 2.0 that allows visitors to browse the exhibitions and masterpieces of the Nippon Museum of Lugano in an easy and intuitive way.
POLITECNICO DI MILANO
Dipartimento di Elettronica e Informazione
HUMAN COMPUTER INTERACTION
Lucio Bordonaro, Silvia Zicca, Andres Fernando Arroyave Yepes
Academic Year 2011/2012
Multitouch technologies are evolving quite fast, not only for single-user applications, but also for cooperative applications that allow users to interact with each other, creating an engaging environment and achieving tasks together. It is quite easy to learn how to use this new kind of technology, even for people who are not used to it at all, and it allows users to explore a way of interaction that is more natural than that of the personal computer, for example.
This is why we developed a multitouch, cooperative, multi-user application that allows visitors to navigate the contents of a museum presented in an innovative way, and that can be easily used even by children and elderly people who want information about the exhibitions presented in the museum.
NipponLugano Multitouch is an application entirely dedicated to the Nippon Museum of Lugano, in Switzerland. The museum hosts four exhibitions related to Japanese photographers. On its website, the museum offers informative content in a hypertextual and multimedia form, allowing guests to deepen their knowledge of the cultural context to which it is dedicated, either before or after the visit.
The aim of the application is to present this content in an innovative way, moving towards a more natural interaction by exploiting the technology offered by Microsoft Surface 2.0, a latest-generation multitouch surface that, among other things, supports cooperative tasks between users and can recognize the direction of touches.
The idea is to place the Surface running our application inside the museum, near the entrance, so that potential guests of every age are tempted to buy a ticket after viewing a preview of the contents presented in the museum. Because it is very easy to understand how to interact with the Surface, even a person who does not know how to use a computer, or a child, can use it.
This application can be used either cooperatively (a user can pass some content to another user to show it to him/her and discuss it together) or non-cooperatively (for example, when the Surface is used by people who do not know each other or do not need to exchange information).
Given the dimensions of the device, the idea is to have users stand only along the long edges of the Surface.
This application has only one kind of user: the potential guest of the museum. We assume that such users received the practical information (address, opening hours and so on) through the website and have just arrived at the museum. We can suppose that this group is heterogeneous and composed of different kinds of users (children on a school trip, elderly people, professional photographers, as well as ordinary visitors with no particular characteristics), but that they all share the same requirements and needs.
In particular, considering the needs of every single independent user, he/she should find answers to questions such as:
• what are the exhibitions of this museum?
• what is every exhibition about?
• what are the themes addressed by each exhibition?
• what are the photos related to a speciﬁed theme of the exhibition?
• what are the details of a selected photo?
Each of the questions above is answered in one screen of our application, whose structure follows the order in which the user is likely to ask them.
Besides these needs, there is also the possibility to use the Surface cooperatively: every piece of content can be rotated and passed to a user standing on the other side of the Surface, in order to discuss it together and show it to other users.
NOTE: In this document we use the words “screen” and “window” as synonyms.
3.1 Gestures design
Microsoft Surface 2.0 supports a huge variety of possible gestures, so we had to select which ones to use in our application.
Here we report the gestures used to interact with the Surface:
• Move the windows: One or more fingers touch a window to move or flick it.
Figure 1: Move the window
• Rotate the windows: there are three ways to rotate a window:
– Single finger rotate: One finger touches a window and drags it around in a circle so that it rotates about its center.
Figure 2: Window rotation (1)
– Two-finger rotate: Two or more fingers on a window are dragged in opposite directions along a circular path.
Figure 3: Window rotation (2)
– Pin turn: One finger remains stationary, acting as a pivot point, while the other fingers move around it.
Figure 4: Window rotation (3)
• Screen simple touch: a simple tap on the screen with a ﬁnger.
Figure 5: Screen simple touch
• Zoom in and out: Two or more fingers on an item are dragged apart or together (used, for example, to resize the gallery window and the images it contains).
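On real hardware the Surface SDK reports these manipulations through its own touch APIs; the following is only an illustrative, platform-independent sketch (the function name and point format are our own, not part of the SDK). The two-finger rotation can be derived from the change in angle of the segment joining the two touch points between successive frames:

```python
import math

def two_finger_rotation(p1_old, p2_old, p1_new, p2_new):
    """Degrees by which the segment between two touch points rotated
    this frame. Each point is an (x, y) tuple; a positive result means
    a counter-clockwise rotation in a y-up coordinate system."""
    old = math.atan2(p2_old[1] - p1_old[1], p2_old[0] - p1_old[0])
    new = math.atan2(p2_new[1] - p1_new[1], p2_new[0] - p1_new[0])
    delta = math.degrees(new - old)
    # Normalize to (-180, 180] so a small physical turn never shows up
    # as an almost-full rotation in the opposite direction.
    while delta <= -180:
        delta += 360
    while delta > 180:
        delta -= 360
    return delta
```

The pin turn is the special case in which one of the two points stays fixed between frames.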
3.2 Screens design
3.2.1 Main screen
The main screen consists of a red circle on a white background, representing the Japanese flag (because the museum is related to Japanese photographers). The red circle in the middle is interactive and is split into four slices that represent the different exhibitions. The users can rotate it around its center and select the desired exhibition by tapping on a slice. The use of a circle allows users to read it easily, independently of their position with respect to the Surface.
Figure 7: Main screen
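Because the wheel itself can be spun by the users, mapping a tap to a slice must account for the wheel's current rotation. A minimal sketch of this mapping (the function name and the slice-numbering convention are illustrative, not the actual implementation):

```python
import math

def tapped_slice(x, y, cx, cy, wheel_deg, n_slices=4):
    """Index of the slice under a tap at (x, y) on the circular menu.

    (cx, cy) is the wheel's center and wheel_deg its current rotation;
    subtracting the rotation keeps slice 0 bound to the same exhibition
    no matter how far the users have spun the wheel."""
    angle = math.degrees(math.atan2(y - cy, x - cx))
    angle = (angle - wheel_deg) % 360
    return int(angle // (360 / n_slices))
```

For example, a tap that falls 90° around the circle selects slice 1 when the wheel is in its rest position, but slice 0 after the users have rotated the wheel by 90°.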
3.2.2 Exhibition screen
Once the user has selected an exhibition, a screen appears with all the details related to that exhibition (title, author, description and some random photos). The window can be rotated so that it can be “passed” to other users who want to read it; it can be moved anywhere on the surface, zoomed in and out, and closed; and it allows opening the themes screen to read more about the main themes of the selected exhibition.
Figure 8: Exhibition screen
3.2.3 Themes screen
The themes screen shows information about the themes of the selected exhibition (title, description and one representative image). This window has a circular menu: by rotating it, it is possible to view information about the different themes, and tapping on the representative image opens the gallery of that theme. This screen can be moved, and the description of each theme can be shown or hidden.
Figure 9: Themes screen
3.2.4 Gallery screen
Every theme has a gallery of images. The gallery window can be enlarged or reduced. The images contained in the window can also be zoomed in and out, and it is possible to scroll horizontally and vertically to view images that are not directly shown when the window is too small.
Figure 10: Gallery screen
3.2.5 Image details screen
Every time the user taps an image in a gallery or in the exhibition library bar, it opens in an independent screen with its details (title, author and description). The user can pass the image to another user without passing him/her the whole gallery of that theme. It is also possible to move, rotate and zoom the image. The text can be shown or hidden.
Figure 11: Image details screen
3.3 States diagram
The following diagram shows all the possible interactions that can be performed with each window and how to pass from one window to another. Note that every window can be closed by the user at any moment without influencing the behavior of the others, and that windows can be opened more than once simultaneously from the two sides of the Surface.
Figure 12: State diagram
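The navigation encoded in the diagram can be summarized by listing, for each window type, the window types it is able to open (the window names below are our own shorthand, not identifiers from the implementation). Closing is omitted because, as noted above, any window can be closed at any time without affecting the others:

```python
# Which screens each screen can open (names are illustrative shorthand).
OPENS = {
    "main": {"exhibition"},
    "exhibition": {"themes", "image_details"},  # the library bar opens a photo
    "themes": {"gallery"},
    "gallery": {"image_details"},
    "image_details": set(),
}

def can_open(source, target):
    """True if an interaction inside `source` may open a `target` window."""
    return target in OPENS.get(source, set())
```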
3.4.1 A relevant non-cooperative scenario
Paolo wants to visit the Nippon Museum following the advice of a friend who has already been there and who told him vaguely what it is about. When he arrives at the museum he has never visited the website, so he knows only the practical information received from his friend (opening hours, address and ticket price); therefore he decides to use the Surface to gather more detailed information about the contents of the museum before buying the ticket.
From the main screen of the application, Paolo decides to retrieve information about Araki's exhibition.
Figure 13: Araki exhibition
After reading about that exhibition and sliding through some of the images it contains, he decides to see what themes it addresses.
Figure 14: Araki themes
Having chosen the theme Artistic Expression, Paolo decides to browse its gallery to get an idea of the photos he will find on that theme.
Figure 15: Araki theme: Artistic Expression
While zooming the gallery, Paolo is impressed by a photo, so he decides to zoom in on it and read some additional information about it.
Figure 16: Araki photo details
Then Paolo closes all the screens and buys a ticket to visit the museum.
3.4.2 A relevant cooperative scenario
Marta and Sara are two sisters fond of photography who decide to visit the Nippon Museum of Lugano together. They walk towards the Surface, position themselves along its two long sides, and start browsing information about two different exhibitions: Araki (Sara) and Shunga (Marta).
Figure 17: Multiuser exhibition
While Marta is still reading, Sara opens the themes screen of the Araki exhibition and selects the one she prefers, opening its gallery.
Figure 18: Multi-user
Marta closes the Shunga screen and decides to view information about Ineffable Perfection; meanwhile, Sara opens and zooms in on a photo that she likes.
Figure 19: Opening a new exhibition
Sara passes the zoomed photo to her sister to hear her opinion.
Figure 20: Passing content to another user
They are both interested in that particular photo, so they decide to buy tickets and go see it.
4 Prototype Implementation
4.1 Paper prototype
After deciding on the main elements of the application, we built a paper prototype to simulate the behavior of each component and to check whether the user requirements were met. As the term “prototype” itself suggests, the definitive version of the application presents some differences.
Some photos of the paper prototype follow.
Figure 21: Exhibition selector
Figure 22: Exhibition screen
4.2 Hardware and software architecture
The following ﬁgure represents the entire Surface platform, including the hardware layer and software
components that you must be aware of when you develop Surface applications.
Figure 26: Software architecture
4.2.1 Windows 7
Microsoft Surface 2.0 runs on the Windows 7 64-bit operating system. Windows 7 provides all the administrative, security, and directory functionality of the Surface. Developers and administrators working with a Microsoft Surface unit have full access to Windows functionality (in Windows Mode). However, when people interact with Microsoft Surface applications, the Windows user interface is suppressed (in Surface Mode).
4.2.2 Vision system
The Vision System uses PixelSense™ to process captured touch data into useful application data that you can access through the Surface SDK APIs. PixelSense™ enables each pixel in the Surface display to detect when a person touches it or moves a finger, tagged object, or untagged object over it. It does this without the use of cameras, which is what makes newer Surface hardware much thinner than previous versions.
4.2.3 Presentation layer
The Presentation layer integrates with Windows Presentation Foundation (WPF) and includes a suite of interaction controls designed for Microsoft Surface, enabling you to quickly and easily build touch-enabled applications.
4.2.4 Core layer
The Core layer exposes Microsoft Surface-specific contact data and events so you can create Microsoft Surface-enabled applications with any user interface (UI) framework that is based on HWND (Handle to a Window).
4.2.5 Windows integration
The tight integration between Microsoft Surface and the Windows operating system provides system-wide functionality on top of Windows. You must use this functionality to support
unique aspects of the Microsoft Surface experience, such as managing user sessions, switching between
the standard Windows user interface (Windows Mode) and the deployment experience (Surface mode),
monitoring critical Microsoft Surface processes, and handling critical failures.
4.2.6 Surface shell
Surface Shell is the component that manages applications, windows, orientation, and user sessions and
provides other functionality. Every Microsoft Surface application must integrate with Surface Shell.
4.3 Technical requirements
To install and develop applications using the Microsoft Surface 2.0 SDK, you must have the following
software installed on your Windows 7 PC workstation, Windows 7 touch PC, or Surface unit that you are
using for development:
• Windows 7 operating system (64-bit recommended)
• .NET Framework 4.0
• Visual Studio 2010 or Visual C# 2010
• XNA 4.0 Redistributable or XNA Games Studio 4.0
• Microsoft Expression Blend 4.0 (recommended)
Additionally, you must create your applications to run on hardware designed for Surface 2.0, such as the
Samsung SUR40 for Microsoft Surface, or Windows 7 touch PCs.
4.4 Actual state of the implementation and open issues
At the moment the application meets all the user requirements specified in Chapter 2, but it is only a prototype that needs to be tested with real users. There are also still some open issues to fix.
First of all, the gallery of each exhibition is designed to contain nine images related to the exhibition. The number nine was chosen because it is simple to arrange the images in a 3×3 matrix, with the selected image in the middle of the square, so that the user can navigate all the images starting from the chosen one. It is also not a large number, so it does not overload memory or slow down program execution.
We decided to show different images from the same exhibition, and the simplest way to do this was to choose one image for each theme of the exhibition. The problem is that some exhibitions do not have nine themes (they have eight), so some images can be repeated more than once.
The same problem occurs when opening the gallery of a specific theme of the chosen exhibition: some themes do not have nine images, so there are duplicates.
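The duplication policy just described can be sketched as follows. This is a simplified model with names of our own choosing; in particular, the real implementation also places the selected image in the central cell, which is not modelled here:

```python
def fill_gallery_grid(images, rows=3, cols=3):
    """Arrange images in a rows x cols grid, repeating them cyclically
    when there are fewer images than cells (duplicates instead of
    empty, potentially confusing, spaces)."""
    if not images:
        raise ValueError("at least one image is required")
    cells = [images[i % len(images)] for i in range(rows * cols)]
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]
```

With eight themes, for instance, the ninth cell of the 3×3 grid simply repeats the first image.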
We analyzed the problem: one solution was to leave some empty spaces, but we decided that showing duplicates was a better solution, because empty spaces could have confused the users. Another possibility was to completely modify the structure of the gallery, but that would have required changing all the interactions and GUI related to that screen.
Another issue is that, in the current state of the implementation, we have not set a maximum number of screens that can be opened on the Surface simultaneously. This is probably not a real problem, since the Surface automatically places the last-touched window above the others.
We have also not set a maximum size for each screen.
4.5 Technical problems
The main problem we had to face during the implementation of the project is that Microsoft Surface 2.0 was not yet on the market when we developed the application, so we had to test it on a normal laptop whose screen resolution is not the same as that of the real Surface (which is full HD). We also had to use a touch simulator to reproduce multitouch actions and other kinds of interaction that are not easily created with mouse movements.
Another minor problem was that some guidelines found on the official Surface developer website are too strict in some respects (e.g. the font size).
The project NipponLugano Multitouch is a useful application dedicated to the Nippon Museum of Lugano, but it can easily be modified and adapted to any kind of museum or exhibition, losing only the metaphor of the Japanese flag in the main screen.
The main thing we learned is how to work with new technologies, and in particular with the multitouch world, which offers a huge number of interactions and possibilities that cannot be found in classical computers.
The Surface also allows the simultaneous interaction of more than one user on the same device, something that cannot be provided by small touch-screen devices (e.g. smartphones, tablets, and so on).
Working with this kind of technology means thinking about human-computer interaction in a totally different way, focusing on the most natural interactions and gestures so that users can concentrate on the content rather than on what they have to do to find and manipulate it.