POLITECNICO DI TORINO
Collegio di Ingegneria Informatica, del Cinema e
Meccatronica
Master of Science in Computer Engineering
Master's Degree Thesis
Enhancing video game experience
through a vibrotactile floor
Supervisor
Prof. Andrea Giuseppe Bottino
Candidate
Nicola Gallo
Student no. 206830
External Supervisor
McGill University - Shared Reality Lab
Prof. Jeremy R. Cooperstock
March 2016
Dedicated to my
Parents
Abstract
In Virtual Reality, the whole idea is to elicit the feeling of Presence, the perception of actually being within a virtual world. When discrepancies arise between what your brain expects and what it actually feels, this feeling can be broken, generating a sense of disappointment along with a sensation of being disassociated from the virtual environment. In order to deceive your mind and give it the illusion that your body is somewhere other than what your eyes are seeing, all five human senses should perceive the digital environment to be physically real. While tricking the senses of smell and taste is not so common in the video game world, the sense of touch has been attracting the attention of a growing number of companies all around the world. However, when pedestrian movement is involved, it is not so clear how to provide compelling haptic feedback, as you would expect to receive it directly under your feet.
This project aims to solve this problem by taking advantage of the possibilities offered by a vibrotactile floor. Two VR experiences have been developed: Infinite Runner, in which the floor was employed to generate particular haptic effects in response to specific game events, and Wakaduck, in which haptic feedback was used not only to enhance the user experience, but also, and above all, to provide haptic cues whose understanding is essential to play the game correctly.
Acknowledgements
First and foremost, I would like to thank my thesis supervisor, Prof. Bottino, for his great patience and inestimable remote support, and for showing a deep interest in the research topic.
My most sincere gratitude goes to Prof. Cooperstock, my supervisor at McGill University, for allowing me to be part of his research group, for assisting me throughout the entire thesis work, and for deeply believing in my potential. My deepest gratitude goes also to my friend and colleague Naoto Hieda, without whose constant support it would have been impossible for me to achieve my research objectives. A special thanks goes to all my mates at the Shared Reality Lab, who made my months in Montreal unforgettable.
Finally, I would like to warmly thank my family and all my closest friends for the immense support provided, especially in the most difficult times. Your love has been and will always be a landmark for me.
NG
Contents
Abstract iii
List of Figures vii
1 Introduction 1
1.1 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Shared Reality Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Literature Review 4
2.1 Immersive Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Locomotion in Immersive Environments . . . . . . . . . . . . . . . . . 6
2.3 Haptic Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Video Game and Immersion . . . . . . . . . . . . . . . . . . . . . . . 7
3 Infinite Runner 9
3.1 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 Haptic floor architecture . . . . . . . . . . . . . . . . . . . . . 10
3.1.2 Motion capture architecture . . . . . . . . . . . . . . . . . . . 14
3.2 Game Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1 How to play . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.2 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.1 Frustum Update . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.2 Character Movement . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.3 Jump Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.4 Slide down detection . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.5 Haptic feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Experiments and Results . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4 MINIW 39
4.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Magic Tiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2.1 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Wakaduck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3.1 How To Play . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3.2 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Server Side . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Unity Side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.4 Game features analysis . . . . . . . . . . . . . . . . . . . . . . 55
4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5 Conclusions and Future Work 59
5.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
A User Testing Documents 63
B Acronyms 69
Bibliography 70
List of Figures
3.1 High level architecture diagram of the system . . . . . . . . . . . . . 9
3.2 Haptic floor architecture diagram . . . . . . . . . . . . . . . . . . . . 11
3.3 Motion capture system architecture . . . . . . . . . . . . . . . . . . . 14
3.4 Headset with reflective markers on it used to track the user movements 15
3.5 In-game snapshots of Infinite Runner . . . . . . . . . . . . . . . . . . 16
3.6 Camera frustums (left) and rendered scenes (right). The rendered
scenes are montages of 4 cameras: left, front, right and floor. On
the top row, the child node (i.e., the eye position) is centered, and
thus the vanishing points on the rendered scenes are in the center of
each image. On the bottom row, by contrast, the vanishing points are
shifted due to the asymmetry of the frustums. . . . . . . . . . . . . . 18
3.7 Haptic floor (left) and virtual floor (right), showing the range of values
they can assume. Only the x-axis is considered, as the player has
control only over the virtual character's movements along this axis. . . 19
3.8 Sequence of actions in a standing vertical jump . . . . . . . . . . . . 21
3.9 Vertical jump as seen by the motion cameras and the FSR sensors . . 21
3.10 Captured data of a participant running and jumping around the hap-
tic floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.11 Flowchart showing the main operations executed by the server (left
side) and Unity (right side) to correctly detect a jump . . . . . . . . . 25
3.12 Sequence of actions in a squat movement . . . . . . . . . . . . . . . . 26
3.13 Squat movement as seen by the motion cameras and the FSRs . . . . 26
3.14 Captured data of a participant executing different movements on the
haptic floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.15 Average collected coins and hit obstacle rates for each participant,
divided into haptic and audio sessions . . . . . . . . . . . . . . . . . . 35
3.16 Results of the post-session questionnaire for Group#1 (top) and Group#2
(bottom) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.17 Results of the post-test questionnaire . . . . . . . . . . . . . . . . . . 37
4.1 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 MINIW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3 Electrodes & Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 In-Game screenshot of Wakaduck . . . . . . . . . . . . . . . . . . . . 47
4.5 How to stand on MINIW while playing Wakaduck . . . . . . . . . . . 47
4.6 Force bars used when playing Wakaduck . . . . . . . . . . . . . . . . 48
4.7 Sensors position within the tiles . . . . . . . . . . . . . . . . . . . . . 50
4.8 Server code flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.9 Perspective view of the game field . . . . . . . . . . . . . . . . . . . . 55
4.10 Photo examples taken during MINIW demonstration at TEDxMon-
treal & Maker Faire Ottawa . . . . . . . . . . . . . . . . . . . . . . . 57
4.11 Plot of participants against number of hit ducks . . . . . . . . . . . . 58
Chapter 1
Introduction
Imagine playing a first-person shooter (FPS) using special components that create a sense of immersion in the virtual reality (VR) world. You can look around and see the whole environment surrounding you, with your allies fleeing for their lives from hostile fire. Like them, you too are able to run away in search of a safe hiding place. As your feet make contact with the ground, you can hear the gravel creaking under your weight; not only that, you can also feel it under your feet, as if you were really there. Suddenly, a grenade explodes, and you sense the ground vibrate while the debris hits you on the back.
In recent years, we have witnessed the emergence of a growing number of increasingly sophisticated systems able to provide compelling graphical and auditory effects related to the interaction with a VR environment. Exploiting these devices, Yoon et al. [1] designed a game interface able to augment the user's level of immersion for the Unreal® Tournament 2004 FPS. The system in question is composed of a head-mounted display (HMD) used for showing the visual information, 5.1-channel headphones for the auditory information, a head tracker and data gloves for making the interaction with the virtual world more natural than the one obtained by playing the video game on a computer (the latter component is used to recognize the user's hand gestures). Lugrin et al. [2] carried out a similar experiment, developing an immersive stereoscopic experience of an already-existing commercial computer FPS through a four-screen CAVE-like installation. Both experiments aimed to compare the desktop version of the chosen video game with its immersive counterpart, and both showed that users strongly preferred the immersive version.
The two examples just cited (like so many other similar projects) have focused on finding the best way to provide the user with a visually and auditorily immersive experience, and on how to interpret the player's body movements, captured through motion sensors, and turn them into useful controls. However, in order to create a sense of full immersion, all five human senses (vision, hearing, touch, smell and taste) should perceive the digital environment to be physically real. While stimulating the senses of smell and taste is not so common in the video gaming world, the sense of touch has been widely employed in a variety of video games since game controllers with embedded vibration actuators became available. As Burdea stated, "haptic feedback is a crucial sensorial modality in virtual reality interactions" [3], and it can be effectively employed to enhance the experience of events happening on the screen. For example, when playing a racing game, haptic feedback could be generated to alert the player whenever the car collides with an obstacle (e.g., a wall or another car).
Certainly, game controllers with vibration feedback can offer some degree of feeling within video games, but the rumbles on your hands hardly count as immersive or lifelike. In order to fill this gap in the availability of immersive haptic devices, a multitude of new haptic interfaces has recently appeared on the market, capable of delivering engaging sensory stimuli to different parts of the human body and, most importantly, at a reasonable cost. For example, the KOR-FX1 Haptic Gaming Vest is able to convert the sound coming from the video game (or any other audio source being played) into haptic feedback, creating a subwoofer-like vibration in proportion to its strength. The Gloveone2, instead, is a virtual reality glove able to provide haptic feedback that the user feels through his hand and fingers.
However, when pedestrian movement is involved, the user may complain that receiving haptic feedback on the hands or on the sternum does not feel realistic at all, lowering the sense of immersion in the virtual world. In order to feel the terrain on which he is walking, the user should be provided with haptic feedback underneath his feet, just as happens in the real world. This result can be effectively achieved by exploiting the potential offered by the haptic floor (also known as a vibrotactile floor) designed and built by Visell et al. [4]. This is a special surface that can simulate the feel of walking on different ground materials, such as snow, grass or pebbles. It consists of a matrix of square tiles, each of which has a linear motor actuator bolted to its underside; moreover, the tiles come with a force sensing resistor (FSR) embedded in every corner (i.e., each tile has four FSRs). The signals generated by the sensors are conditioned and subsequently digitized by a microprocessor board, which transmits the force data over a serial data link to a computer running a software simulation written in the Max/MSP visual programming language. The simulation generates independent audio signals for each tile, which are used to drive each corresponding actuator via an audio amplifier.
The thesis' primary focus is to understand how the haptic floor can be employed to consistently enhance the players' experience or their performance in gameplay.
1 http://korfx.com/
2 https://www.gloveonevr.com/
It is unknown whether haptic feedback is more effective if delivered to a body part
that would normally experience such feedback in real-world conditions, e.g., to the
feet vs. the hands if the interaction involves pedestrian movement. Gaining a better
understanding of these issues will allow for improved game design and simulation of
immersive, multimodal virtual reality experiences.
The thesis’ secondary focus will be to investigate whether the haptic floor could
also be used as an input device and not simply as an output one. The idea is that
the players can be provided with haptic cues instead of visual ones, allowing them to understand the status of the video game and adapt their actions if necessary.
1.1 Thesis Outline
The remainder of the thesis is organized as follows. Preceding research on immersive
environments and haptic technologies is reviewed in Chapter 2. In Chapter 3, a
complete description of an immersive experience developed exploiting the potential
offered by the vibrotactile floor is presented. Also, Chapter 3 presents the results
obtained from an experiment designed to explore the role that the haptic feedback
generated by the haptic floor can play in enhancing a player’s experience while
playing a video game. In Chapter 4, the experiences developed using MINIW, a 2×2 haptic floor platform, are described. Finally, conclusions and possible future work and enhancements are presented in Chapter 5.
1.2 Shared Reality Lab
The thesis project described in this document took place within the Shared Reality
Lab3
, a facility that is part of the Centre for Intelligent Machines (CIM) research group at McGill University in Montreal, Canada.
The lab is broadly concerned with human-computer interaction technologies, em-
phasizing multimodal sensory augmentation for communication in both co-present
and distributed contexts. The research carried out by the members tackles the full
pipeline of sensory input, analysis, encoding, data distribution, and rendering, as
well as interaction capabilities and quality of user experience. Applications of these
efforts include distributed training of medical and music students, augmented en-
vironmental awareness for the blind community, treatment of lazy eye syndrome,
low-latency uncompressed HD videoconferencing and a variety of multimodal im-
mersive simulation experiences.
3 http://srl.mcgill.ca/
Chapter 2
Literature Review
The goal of this background chapter is to provide an introduction to the major technological breakthroughs that have been made in the field of virtual reality applied to gaming. These previous research efforts may be categorized into three distinct hardware groups: immersive displays, locomotion systems in immersive environments, and haptic devices. Finally, the chapter also provides an overview of research aimed at defining techniques for analyzing the level of immersion offered by video games.
2.1 Immersive Displays
The history of virtual reality has its origins in the inventions of Morton Heilig, who in 1962 patented the Sensorama [5], a cabin with stereoscopic screens, stereo speakers and a moveable chair. This device involves several human senses: it allows the user to watch, through a stereoscopic viewer, real images shot using two cameras; it provides tactile feedback by generating vibrations in the seat and the handlebars; it uses a hair dryer to simulate wind at different speeds; and, finally, it generates olfactory feedback.
In 1968 Ivan Sutherland created the first HMD called The Sword of Damocles [6].
The system consisted of two monitors (one for each eye) mounted on a device an-
chored to the ceiling and fastened to the user's head. It was capable of tracking the head position; the head movements were sent to a computer that generated the proper perspective (of a wireframe cube), giving a primitive illusion of being in a virtual world.
Following these milestones, VR has increasingly been used in the gaming field
to provide players with the most immersive experience possible. In the early 1990s
a company called Virtuality Group introduced VR to arcade video games. This
result was achieved by employing the Virtuality cabinets [7], huge oversized units
where players stepped in, placed virtual goggles over their heads and put themselves
in a three-dimensional gaming world. The game unit was provided with several
games, including some of the most famous arcade games at that time like Pac-
Man and Legend Quest. In 1998 the company developed a consumer VR display in
partnership with Philips Electronics, but it didn’t have much success.
In 1993 Sega announced the Sega VR [8] headset for the Sega Genesis console.
The headset was equipped with liquid-crystal displays (LCDs) in the visor, stereo
sound and some tracking sensors so as to track the user’s head movements. How-
ever, due to technical development difficulties the device remained forever in a pro-
totype phase. An artistic application of such head-mounted displays is Osmose [9]
developed by Davies and Harrison in 1996. The virtual environment consisted of
semi-transparent image layers to generate nature or cyber scenes. The position of
the first person was controlled by a respiration sensor and a weight tracker. By
breathing faster or slower, the user could move up or down, respectively, while the
weight controlled the horizontal movement; this system took inspiration from diving.
It is only since the beginning of the new millennium that VR has begun to gain broad appeal, mainly due to cost reductions that have given the general public access to previously inaccessible technologies. The most famous device, to be released in the early months of 2016, is the Oculus Rift [10], which has had the honor of bringing the world's attention back to VR. Thanks to this device, the dream of many kids who grew up in the '90s will come true: a wearable viewer capable of letting us virtually explore any location, immersing us in virtual worlds created by a large number of developers. Born as a product merely intended for a gaming audience, Oculus has, with its acquisition by Facebook, become much more than a mere accessory for gamers; one of the most exciting and fascinating non-gaming applications of VR is the work done by Gerardi et al. [11], who developed Virtual Reality Exposure Therapy to assist veterans in the treatment of post-traumatic stress disorder (PTSD) by reconstructing events in a safe virtual environment controlled by the patients. The system was initially developed using an Emagin Z-800 3D visor, but the researchers seek to incorporate the Oculus Rift once the final version is available, so as to include scenarios specifically for military sexual trauma, with the idea of recreating not the assaults themselves, but the context in which they occurred.
Since it is closely related to the work presented in this document, it is worth mentioning the research carried out by Dan Sandin, who at ACM SIGGRAPH '93 demonstrated what was called a CAVE [12]. This is a system devoted to creating an immersive experience by surrounding a user with four projection screens: on the left, center, right and floor. Although LCDs are not cost-effective for surrounding a user, projection can change the screen size by adjusting the distance. There are dome types as well, for which distortion has to be taken into account in graphics rendering [13]. Nonetheless, these platforms require a dedicated space. If a room has white walls, they can be used as screens even if they are not flat, square surfaces. To do so, perspectives must be corrected by obtaining their geometry relative to the projectors. To acquire the
geometry, fiducials and/or structured light can be used together with a camera.
However, there is a challenge: a standard camera lens cannot capture all the walls at the same time. Garcia-Dorado et al. [14] solved this problem with a single camera by mechanically controlling its orientation so that it faces each wall in turn. Hashimoto et al. [15] proposed a system with a fish-eye lens to capture the entire surface. Recently, Jones et al. [16] developed RoomAlive, which uses several depth-camera and projector pairs to acquire the geometry and project virtual contents. Not only the surface geometry but also the skeletal model of the user is tracked by the depth sensors, so that the user can interact with the contents within the Unity game engine.
2.2 Locomotion in Immersive Environments
One of the most intriguing aspects of developing a VR environment is how to enable the player to navigate within it. Due to the physical constraints of a CAVE platform, several virtual locomotion methods have been proposed. For example, Cirio et al. [17] came up with three different solutions. In the first one, virtual signs are rendered in the graphical environment, as a metaphor of traffic signs, to guide the user. Second, a virtual rope is rendered around the user, allowing him to move within the virtual world by virtually pushing the rope with his hands. Finally, they introduced the Virtual Companion, i.e., a bird with virtual reins attached to it that the user can "grasp" in order to be carried around the world.
However, virtual locomotion systems such as the ones just cited do not require kinetic motions of the legs. To accomplish kinetic input while keeping the user on the spot, a treadmill [18] or a low-friction surface [19] can be used. For example, Fung et al. [20] developed a system with a stereoscopic screen and a self-paced treadmill mounted on a 6-DOF motion platform for gait training. For an entertainment application, VibroSkate by Sato et al. [21] uses a skateboard metaphor to achieve kinetic locomotion. The left leg stays on a skateboard affixed to the platform and the user kicks a treadmill next to the skateboard with the right foot to move virtually in the environment. Moreover, transducers attached to the skateboard produce vibrations according to the virtual speed and ground conditions. Graphics are generated by the Unity game engine and projected on the front and floor screens. It is worth noting that, by introducing a solid ring around the body of the user for safety reasons, this technology can be employed not only with a CAVE but also with HMDs such as the Oculus Rift for commercial applications. Examples of such systems include the Cyberith Virtualizer [22] and the Virtuix Omni.
Virtusphere by Medina et al. [23] is another example of a treadmill. Essentially, this device is a giant plastic hamster ball that lets users feel as if they were walking through a virtual world. Once inside the sphere, it is possible to move about freely, i.e., an individual can run, jump, move from side to side and, virtually, act as in a real-world scenario.
2.3 Haptic Devices
Haptic feedback in general plays an important role in augmenting the level of immersion in VR systems. There are two kinds of haptic feedback: tactile and kinesthetic. An early example of a kinesthetic feedback system is Phantom [24], which provides feedback on the fingers using DC motors, giving the illusion of physically touching objects in cyberspace. The PHANToM OMNI haptic device [25], instead, is a pen device attached to a mechanical arm, manufactured for research purposes. SPIDAR by Sato [26] has a ball attached to strings; the user holds the ball, and force feedback is provided by the tension of the strings. There are commercial devices, especially for gaming, as well: for example, the Novint Falcon 3D Touch controller. As for tactile feedback systems, electro-tactile displays were proposed by Kajimoto et al. [27]. HORN, the Ultrasound Airborne Volumetric Haptic Display by Inoue et al. [28], is a non-contact, mid-air tactile feedback device that uses an array of ultrasound speakers to transmit energy to the hand. Fairy Lights in Femtoseconds by Ochiai et al. [29] displays holograms using a femtosecond laser, and the energy of the laser provides haptic feedback.
For haptic interaction through the feet, Visell et al. [4] built a vibrotactile floor to synthesize virtual ground textures. Several virtual environments have been proposed for it; for example, a fluid simulation uses a particle system that reacts to footsteps, simulating bubbles for graphics and haptics rendering. In the snow example [30], the visual effects are simulated by modifying a height map in real time, although the haptic preset does not simulate the compression of snow.
2.4 Video Game and Immersion
Immersion is a word often used to describe an aspect of video games, but it has no clear definition. According to the study by Brown and Cairns [31], gamers encounter several barriers on the way to total immersion. In the first stage, engagement, there must be a motivation to play the game, and the gamer then has to understand the controls of the game. Next, the gamer spends time and puts effort and attention into playing. The second stage is engrossment: to become emotionally involved in the video game, its graphics, tasks and plot in particular must be well designed. The last stage is total immersion, when gamers are absorbed in the video game and no longer care about their surroundings. This requires empathy with the character, and an atmosphere in which the graphics, plot and sounds relate to the game world. In the follow-up study by Cheng and Cairns [32], participants were asked to play Unreal Tournament, and at the midpoint the game's theme (environment textures, physics parameters) was changed. Surprisingly, the participants were not bothered by the change, and some of them did not even notice it.
Hazlett reported a method to detect positive valence using biosignals [33]: the EMG of facial muscles was measured while participants played a racing video game. Game events were classified into positive (e.g., overtaking) and negative (e.g., going off the road) events, and the study found that positive emotions can be measured by EMG.
A study of movement-based video games was carried out by Pasch et al. [34]. In the first experiment, a qualitative analysis of Wii Sports was performed through user interviews. Subsequently, a quantitative analysis of Wii Sports Boxing was conducted: videos were recorded and five observers rated how close the players' movements looked to real boxing. Two strategies emerged: Game (moving just enough to trigger the punch in the game) and Simulation (simulating real boxing). The Game strategy led to a high frequency of punches but with small motions and less engagement. The Simulation strategy involves defensive motions even though they are not necessary, and is used when gamers want to relax. Immersion happens when the player feels empathy with the avatar mimicking his motion.
Chapter 3
Infinite Runner
The aim of Chapter 3 and Chapter 4 is to report in detail the work carried out in the present thesis. In particular, the following paragraphs focus on Infinite Runner, an endless running game developed with the intent of exploring the potential offered by the haptic floor. We begin by providing a technical overview of the VR environment in use, and then focus on the features of the game. Finally, in the last part of the chapter we present the results of the experiment we conducted.
3.1 System Architecture
Figure 3.1: High level architecture diagram of the system
The system in use is a composition of different subsystems interacting with one another. Figure 3.1 shows an overview of all the main components involved:
• CAVE: immersive virtual reality environment, consisting of three large screens onto which the images generated by a VR application are projected. The environment also includes a motion capture system consisting of eight cameras placed on top of the CAVE's frame; this system allows us to track the user's movements.
The haptic floor is housed within this environment: not only does it act as a fourth display surface, but it also provides realistic multimodal feedback that enhances the overall immersion.
• Mac mini array: array of six Mac mini computers that receive data from both the haptic floor and a Windows machine running our video game. Each computer is responsible for synthesizing the haptic feedback of one row of six tiles based on the incoming data.
• Graphic manager: Windows computer tasked with executing our infinite runner video game. The information coming from the motion cameras and the haptic floor is used as input to update the state of the game. The game status is constantly sent to the Mac mini array, which uses it to update the haptic feedback in real time.
In the following sections we will discuss in more detail the characteristics of
the haptic floor and the motion capture system, and how they interact with our
application.
3.1.1 Haptic floor architecture
The haptic floor is a complex system consisting of many hardware and software elements. The aim of this section is to give a brief overview of the components shown in the diagram in Figure 3.2. For a more technical analysis of the system, the reader is invited to look at the works by Visell et al. [4] [30] [35], the creators of the vibrotactile floor.
The haptic floor consists of a 6×6 surface of square tiles, each tile containing four FSR sensors, one in each corner, and a tactile transducer in the middle. This means that the system has 144 sensors and 36 transducers in total. The sensors are used to detect the force that a user standing on the floor is currently applying to it, while the transducers are used to play the synthesized sounds that simulate the haptic feedback by means of the resulting vibrations.
All the sensors within a row are connected to one Gluion1 unit, which includes 4×6 Analog-to-Digital Converters (ADCs) that sample the FSR voltages and convert them into a numerical format. Each unit also includes a network interface that is used to broadcast all the raw values via the User Datagram Protocol (UDP) using the Open Sound Control (OSC) protocol2. In order to acquire data from all the sensors, six different units are employed, each controlling 24 sensors.

Figure 3.2: Haptic floor architecture diagram

1 http://www.glui.de/
2 http://opensoundcontrol.org/
OSC is a protocol for communication between computers, synthesizers, audio and other multimedia devices. It was designed to support a client/server architecture. Each OSC message consists of three parts (a minimal encoding sketch is given after the list):
• Address pattern: an arbitrary sequence of characters beginning with, and possibly containing further, "/" delimiters. It represents the name of the message specified by the client and, through the use of the delimiter, makes it possible to create hierarchies of messages following a directory/file model.
• Type tag string: a string whose sequence of characters specifies the types of the data sent. Indicating the nature of the data is not mandatory, but it is highly recommended.
• Arguments: the data contained in the message.
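For illustration, the following is a minimal plain-C# sketch of how an OSC message in the "/analog" format described below could be encoded by hand and sent over UDP. No OSC library is assumed, and the destination host and port are hypothetical placeholders; in the real system this encoding is performed by the Gluion firmware, so the sketch only serves to make the wire format concrete.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

static class OscEncodingSketch
{
    // Pad an ASCII string with NULs to a multiple of 4 bytes, as OSC requires.
    static byte[] OscString(string s)
    {
        int padded = (s.Length / 4 + 1) * 4;
        var bytes = new byte[padded];
        Encoding.ASCII.GetBytes(s, 0, s.Length, bytes, 0);
        return bytes;
    }

    // Encode a 32-bit integer in big-endian byte order, as OSC requires.
    static byte[] OscInt(int v)
    {
        var b = BitConverter.GetBytes(v);
        if (BitConverter.IsLittleEndian) Array.Reverse(b);
        return b;
    }

    // Build a message: address pattern, type tag string, then the arguments.
    static byte[] BuildMessage(string address, int[] args)
    {
        var payload = new List<byte>();
        payload.AddRange(OscString(address));
        payload.AddRange(OscString("," + new string('i', args.Length)));
        foreach (int a in args) payload.AddRange(OscInt(a));
        return payload.ToArray();
    }

    static void Main()
    {
        int[] fsrValues = new int[24];                    // one value per FSR in the row
        byte[] msg = BuildMessage("/analog", fsrValues);
        using (var udp = new UdpClient())
            udp.Send(msg, msg.Length, "192.168.1.10", 7770);  // hypothetical host and port
    }
}
```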
All the packets generated by every Gluion unit have an address pattern equal to "/analog" and contain the 24 values output by the ADCs. The recipient of these packets is an array of six Mac mini computers, each of which is responsible for managing the data coming from a single unit. This means that each unit always sends data to the same machine, i.e., the Gluion receiving data from the sensors placed in the first row sends data to the first Mac mini, the second Gluion sends data to the second Mac mini, and so on.
Each Mac mini is responsible for generating haptic feedback through a Max/MSP3 patch only for the row of six tiles from which it receives the data. However, we must note that the information regarding which feedback should be generated for each individual tile, and with what intensity, does not come directly from the Gluion units. All the computers constantly execute a program written in Java whose purpose is to accept new inbound OSC messages, but these are not directly parsed: the program simply rebroadcasts the incoming "/analog" messages from the Gluion units to a NIW server after changing the address pattern to "/niw/server/update/row/Mac_mini#", where Mac_mini# is the number of the machine from which the message is being sent. In the terminology of the architecture, a computer running this program is said to be a "NIW slave"; all the Mac minis belong to this category.
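The rebroadcasting step performed by each NIW slave can be pictured as follows. The actual program is written in Java; this is only a hedged C# sketch of the same logic, reusing the OSC string-padding rule from the previous example, and the ports, server address and machine number are hypothetical.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class NiwSlaveSketch
{
    const int MacMiniNumber = 3;   // hypothetical machine number (1..6)
    static readonly IPEndPoint Server =
        new IPEndPoint(IPAddress.Parse("192.168.1.100"), 7771);   // hypothetical NIW server

    static byte[] OscString(string s)
    {
        int padded = (s.Length / 4 + 1) * 4;
        var b = new byte[padded];
        Encoding.ASCII.GetBytes(s, 0, s.Length, b, 0);
        return b;
    }

    static void Main()
    {
        var inSocket = new UdpClient(7770);            // hypothetical port the Gluion sends to
        var outSocket = new UdpClient();
        byte[] oldPrefix = OscString("/analog");
        byte[] newPrefix = OscString("/niw/server/update/row/" + MacMiniNumber);
        var from = new IPEndPoint(IPAddress.Any, 0);

        while (true)
        {
            byte[] packet = inSocket.Receive(ref from);

            // Only forward messages whose address pattern is "/analog".
            bool isAnalog = packet.Length >= oldPrefix.Length;
            for (int i = 0; isAnalog && i < oldPrefix.Length; i++)
                if (packet[i] != oldPrefix[i]) isAnalog = false;
            if (!isAnalog) continue;

            // Swap only the address pattern; type tags and arguments stay untouched.
            byte[] rest = new byte[packet.Length - oldPrefix.Length];
            Array.Copy(packet, oldPrefix.Length, rest, 0, rest.Length);
            byte[] outPacket = new byte[newPrefix.Length + rest.Length];
            Array.Copy(newPrefix, outPacket, newPrefix.Length);
            Array.Copy(rest, 0, outPacket, newPrefix.Length, rest.Length);

            outSocket.Send(outPacket, outPacket.Length, Server);
        }
    }
}
```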
The NIW server is none other than Mac mini #1 which, in addition to executing the program that makes it a slave machine, also runs another program that performs simple filtering and analysis operations on the pressure data arriving from all the Gluion units (via the various NIW slave instances).
One of the most important operations executed by the server is to convert the incoming data from a raw format (i.e., one data point per sensor) to a tile format (i.e., one data point per tile); in this way, each row has only six associated values instead of 24. Exploiting the fact that Max/MSP has native support for OSC, these values are inserted as arguments in an OSC message and sent to the Max/MSP patch running on the computer in charge of managing the row to which they refer (e.g., the six values of the first row are sent to the patch running on Mac mini #1, the six values of the second row to the patch running on Mac mini #2, etc.). The pressure values are used by the patches as parameters for the physical model employed to generate the selected haptic feedback; the latter is just an audio signal that, through analog connections, is sent to and reproduced by the 36 haptic transducers placed underneath the tiles (one per tile).

3 https://cycling74.com/
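As an illustration of the raw-to-tile conversion, here is a small C# sketch that collapses the 24 per-sensor values of one row into six per-tile values. The thesis does not specify how the four corner readings are combined, nor how the sensors are ordered inside the packet, so both the summation and the tile-by-tile ordering are assumptions.

```csharp
using System;

static class TileConversionSketch
{
    // Collapse one row of raw FSR readings (4 sensors per tile, 6 tiles) into
    // one aggregate value per tile. Summing the corners, and assuming the 24
    // values arrive ordered tile by tile, are both assumptions.
    public static int[] RawToTiles(int[] raw24)
    {
        if (raw24.Length != 24)
            throw new ArgumentException("expected 24 FSR values per row");

        var tiles = new int[6];
        for (int tile = 0; tile < 6; tile++)
            for (int corner = 0; corner < 4; corner++)
                tiles[tile] += raw24[tile * 4 + corner];
        return tiles;
    }
}
```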
In order for this system to work correctly, it is essential that the patches present on each Mac mini send an OSC message to the server to inform it of which address pattern to use when sending pressure data. For each Mac mini, the server saves the pair Mac_Mini_#/Address_Pattern, so as to be able to notify all the machines whenever the floor status is updated.
The server does not exchange information only with the other Mac minis, but also, and especially, with a Windows machine responsible for executing our VR applications by means of the Unity game engine. In particular, the server notifies the running application about changes in the floor status by sending OSC messages: for example, by analyzing the raw data coming from the sensors, it is possible to state whether someone is standing on the floor, the position of his feet within the CAVE, and even whether he makes a jump (a detailed description of how the system detects when someone standing on the floor performs a jump is given in Section 3.3.3). All the data contained within the OSC messages are used as inputs by the application to update its status.
In order for the server to communicate properly with the application, the latter must send, during the startup phase, a message indicating which address pattern should be used when sending notifications about a specific event. This means that it sends as many messages as the number of services needed from the server: for example, if the application needs to be notified whenever a user is standing on the floor and whenever he performs a jump, it sends two OSC messages indicating two different address patterns.
Finally, the application has the ability to define what feedback should be generated by each individual tile whenever a user steps on it. This is done by sending an OSC message to the server containing, for each tile, the name of the feedback preset to be associated with it. The server then forwards this information to all the Max/MSP patches. Moreover, the application can also trigger a neutral feedback from a specific tile even when no one is stepping on it. For more details on how our application exploits the possibilities offered by this newly designed system, the reader is invited to view Section 3.3.5.
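To make the configuration flow concrete, the sketch below shows how the Unity-side application might assemble the two kinds of OSC messages described in this section: one assigning a feedback preset name to every tile, and one registering the address pattern on which it wants to be notified about a given event. The address patterns, preset names, host and port used here are all hypothetical; only the general mechanism is taken from the text.

```csharp
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

static class FloorConfigSketch
{
    static byte[] OscString(string s)
    {
        int padded = (s.Length / 4 + 1) * 4;
        var b = new byte[padded];
        Encoding.ASCII.GetBytes(s, 0, s.Length, b, 0);
        return b;
    }

    // Build an OSC message whose arguments are all strings (type tag 's').
    static byte[] BuildStringMessage(string address, IList<string> args)
    {
        var payload = new List<byte>();
        payload.AddRange(OscString(address));
        payload.AddRange(OscString("," + new string('s', args.Count)));
        foreach (var a in args) payload.AddRange(OscString(a));
        return payload.ToArray();
    }

    static void Main()
    {
        using (var udp = new UdpClient())
        {
            // Assign a (hypothetical) preset name to each of the 36 tiles, row by row.
            var presets = new List<string>();
            for (int tile = 0; tile < 36; tile++)
                presets.Add(tile < 18 ? "stone" : "gravel");          // illustrative preset names
            byte[] setPresets = BuildStringMessage("/niw/server/presets", presets);   // hypothetical address
            udp.Send(setPresets, setPresets.Length, "192.168.1.100", 7771);           // hypothetical host/port

            // Register the address on which the game wants jump notifications.
            byte[] subscribe = BuildStringMessage("/niw/server/subscribe",
                                                  new[] { "jump", "/game/jump" });    // hypothetical
            udp.Send(subscribe, subscribe.Length, "192.168.1.100", 7771);
        }
    }
}
```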
3.1.2 Motion capture architecture
Figure 3.3: Motion capture system architecture
In addition to the haptic floor, the other key element of our architecture is the motion capture system used to track the player's movements. As depicted in Figure 3.3, the setup consists of eight Vicon Bonita B10 motion cameras4 arranged on top of the CAVE frame in a strategic order so as to "capture" all the space contained within it (i.e., the cameras face the haptic floor). These high-resolution cameras emit strobe light, which is reflected back by small spheres (markers) covered with a retro-reflective substance; as shown in Figure 3.4, these markers are placed on a headset that can easily be worn by the user whose movements we want to track.
The reflected light is captured by each camera, and the resulting images are sent via Ethernet to the Windows machine running the specialized software called Vicon Tracker. The aim of this application is to locate the markers seen by the cameras and to record them as 3D coordinates. The markers placed on the headset are defined in the application as a rigid body, i.e., as a virtual object composed of a specified set of markers with a relatively fixed proximity to one another. In other words, these markers are considered as a whole and not as single objects.
4 http://www.vicon.com/products/camera-systems/bonita
Figure 3.4: Headset with reflective markers on it used to track the user movements
One of the most useful features of Vicon Tracker is its built-in Virtual-Reality Peripheral Network5 (VRPN) server, through which the application natively streams the position and orientation data of all the defined rigid bodies; in our case, the only data broadcast are those associated with the headset. However, these data cannot be received directly by our video game built with Unity unless it is equipped with a VRPN client. Taking advantage of the fact that our application already contains an OSC client for receiving data from the haptic floor, it was decided to exploit the functionality offered by the Vrpn-OSC-Gateway6 project and thereby standardize the system so as to deal only with OSC messages. Simply put, this small application receives the tracking data directly from Vicon Tracker, converts them and, finally, sends them as OSC messages. As we will see in the coming sections, the data contained within those messages are essential for the correct functioning of our video game.
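On the receiving side, the game only needs to decode the incoming OSC messages from the gateway. The following plain-C# sketch parses a message assumed to carry the headset position as three float arguments; the address pattern, the listening port and the argument layout are assumptions, since the actual patterns are configured in the gateway.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class HeadTrackingSketch
{
    // Length of a NUL-padded OSC string that starts at 'offset'.
    static int PaddedLength(byte[] data, int offset)
    {
        int end = offset;
        while (data[end] != 0) end++;
        return ((end - offset) / 4 + 1) * 4;
    }

    static float ReadBigEndianFloat(byte[] data, int offset)
    {
        var b = new byte[4];
        Array.Copy(data, offset, b, 0, 4);
        if (BitConverter.IsLittleEndian) Array.Reverse(b);
        return BitConverter.ToSingle(b, 0);
    }

    static void Main()
    {
        var udp = new UdpClient(7772);                    // hypothetical port the gateway sends to
        var from = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] msg = udp.Receive(ref from);

            // Read the address pattern (the first NUL-padded string of the message).
            int addrLen = 0;
            while (msg[addrLen] != 0) addrLen++;
            string address = Encoding.ASCII.GetString(msg, 0, addrLen);
            if (address != "/tracker/headset") continue;  // hypothetical address pattern

            // Skip the address pattern and the type tag string (assumed ",fff..."),
            // then read the three position floats in big-endian order.
            int cursor = PaddedLength(msg, 0);
            cursor += PaddedLength(msg, cursor);
            float x = ReadBigEndianFloat(msg, cursor);
            float y = ReadBigEndianFloat(msg, cursor + 4);
            float z = ReadBigEndianFloat(msg, cursor + 8);
            Console.WriteLine($"head position: ({x:F2}, {y:F2}, {z:F2}) m");
        }
    }
}
```

In the actual game these coordinates feed the frustum update and the character movement logic described in the following sections.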
5 https://github.com/vrpn/vrpn/wiki
6 https://code.google.com/p/vrpn-osc-gateway/
3.2 Game Design
This section gives a thorough description of the main characteristics of the VR application and explains the reasons that led to the development of a video game of this genre rather than another one (such as an open-world video game).
3.2.1 How to play
Figure 3.5: In-game snapshots of Infinite Runner
Infinite Runner falls under the genre of infinite running games, in which the virtual character continuously moves forward through a procedurally generated, theoretically endless game world. In the game, the player controls a soldier who, having broken into a castle to steal its treasures, is chased by a dragon that wants to burn him alive. The goal of the game is to collect as many coins as possible while avoiding all the obstacles encountered along the way. Figure 3.5 shows some screenshots of the final version of the game.
The application was developed using the toolkit called "Infinite Runner Starter Pack"7, which provided us with an already functioning gaming system. The choice of using this Unity asset instead of developing an entirely new game system from scratch was dictated by the fact that the thesis' main purpose was to enhance the user experience of a game that had already been considered fun and immersive when played in a regular mode, i.e., on a computer or on a mobile device. This decision allowed us to concentrate on how to make the most of the available immersive VR environment rather than spending time designing the game itself.

7 https://www.assetstore.unity3d.com/en/#!/content/8949
In the original version of the game (i.e., the one played using a normal computer and a keyboard), the player, sitting comfortably on a chair, can press either the left or right arrow key to move the virtual character left or right, collecting coins or avoiding objects. When he needs to turn left or right at a crossroads, he can simply press the arrow key for the corresponding direction. If he wishes to jump over an object, he can press the up arrow key, while if he wishes to slide under an object, he can press the down arrow key instead. What we want now is for the player to be able to play the game in a much more immersive way, experiencing the functionality offered by the SRL's CAVE. As a consequence, all the actions just listed are no longer simply performed by the virtual character as a reaction to a key pressed on a keyboard; the player himself has to physically execute them. In other words, the player has to impersonate the virtual character:
• First, the player should be able to move to the right or to the left within the perimeter of the haptic floor so as to make the approaching coins shown on the screen "hit" his body and thereby collect them all; similarly, his movements should also allow him to avoid the obstacles.
• The player should be able to turn to the right or to the left at a crossroads in an intuitive way whenever necessary.
• Finally, in order to avoid some particular obstacles, simple movement may not be enough. In these cases, the player should be able to either jump over or slide under the obstacles, as necessary.
In addition to all these features, the player should also be provided with tactile feedback from the vibrotactile floor as a consequence of his actions, so as to make his experience as immersive as possible. The following paragraphs present a comprehensive analysis of how all these aspects of the game have been implemented using the data coming from the motion cameras and, above all, from the haptic floor.
3.2.2 Motivations
The decision to develop an endless running game was not casual; it was dictated by an intrinsic limitation of any CAVE environment: the walking area is restricted by the physical space. As mentioned in Chapter 2, there are several solutions to overcome this problem, most of them requiring the use of a special pointer. As a result, the player does not physically move, but can simply press a button to achieve the desired result. In my opinion, the use of such a device causes a break in presence (BIP) with respect to the virtual environment. This led me to implement Infinite Runner, in which it is the virtual world that moves around the player; although the latter does not have the freedom to physically navigate within the virtual world, the sensation of having an environment around him that keeps moving over time provides a feeling of movement, as if he were really running in that world.
3.3 Implementation
3.3.1 Frustum Update
Figure 3.6: Camera frustums (left) and rendered scenes (right). The rendered scenes are montages of 4 cameras: left, front, right and floor. On the top row, the child node (i.e., the eye position) is centered, and thus the vanishing points on the rendered scenes are in the center of each image. On the bottom row, by contrast, the vanishing points are shifted due to the asymmetry of the frustums.
In an immersive system, virtual objects must be rendered in such a way that the viewer perceives a parallax effect. In order to do so, the physical setup must be correctly mapped to the virtual environment. The unit of length in both the game engine and the motion capture system is the meter. The motion capture system is calibrated so that its origin is located at the center of the floor, and it tracks the user's head position. In the virtual environment, there are parent and child nodes organized in a scene graph. The parent node represents the physical origin (i.e., the center of the floor), and it can be moved to an arbitrary position in the virtual environment to "teleport" the user to another position. The node has a bounding box with a fixed dimension of 2.4 m × 2.4 m × 2.4 m, centered at a height of 1.2 m above the physical origin. The bounding box is defined to represent the physical screens of the CAVE. The local position of the child node is updated to be the tracked head position.
For graphics rendering, since the setup consists of flat rectangular screens, a camera model can be defined for each screen [12]; for our setup there are four of them: on the left, right, front and bottom of the user (Figure 3.6). The camera frustums must be updated with respect to the user's eye positions. For a monoscopic setup, the camera position must be at the midpoint between the eyes; therefore, the camera position is approximately the position of the child node. The near clipping plane is the plane of the bounding box that represents the physical screen. In practice, virtual objects can exist closer than the physical screen. Thus, the top, bottom, left, right and near parameters are multiplied by a factor x < 1 to move the near clipping plane closer to the viewpoint so that such objects can still be rendered (for Infinite Runner, we use x = 0.0625).
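The following plain-C# sketch makes the frustum computation concrete for the front screen. The screen layout (front wall on the plane z = 1.2 m, spanning 2.4 m in width and height), the far-plane value and the coordinate conventions are assumptions based on the description above; only the scale factor 0.0625 is taken directly from the text. The left, right and floor screens are handled analogously, each with its own plane and in-plane axes.

```csharp
using System;

static class OffAxisFrustumSketch
{
    const float HalfWidth = 1.2f;     // assumed half extent of the front screen in x (metres)
    const float ScreenZ   = 1.2f;     // assumed distance of the front screen plane from the origin
    const float NearScale = 0.0625f;  // factor used for Infinite Runner to pull the near plane in

    // Returns a 4x4 perspective projection matrix in the standard glFrustum form,
    // indexed as m[row, column], for an eye at (eyeX, eyeY, eyeZ) in floor coordinates.
    public static float[,] FrontScreenProjection(float eyeX, float eyeY, float eyeZ)
    {
        float dist = ScreenZ - eyeZ;                     // eye-to-screen distance

        // Frustum extents measured on the screen plane relative to the eye,
        // then scaled so the near plane moves closer to the viewpoint.
        float left   = (-HalfWidth - eyeX) * NearScale;
        float right  = ( HalfWidth - eyeX) * NearScale;
        float bottom = ( 0.0f      - eyeY) * NearScale;  // screen assumed to span y in [0, 2.4]
        float top    = ( 2.4f      - eyeY) * NearScale;
        float near   = dist * NearScale;
        float far    = 1000.0f;                          // arbitrary far plane

        var m = new float[4, 4];
        m[0, 0] = 2 * near / (right - left);
        m[1, 1] = 2 * near / (top - bottom);
        m[0, 2] = (right + left) / (right - left);       // off-axis shift in x
        m[1, 2] = (top + bottom) / (top - bottom);       // off-axis shift in y
        m[2, 2] = -(far + near) / (far - near);
        m[2, 3] = -2 * far * near / (far - near);
        m[3, 2] = -1;
        return m;
    }
}
```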
3.3.2 Character Movement
Figure 3.7: Haptic floor (left) and virtual floor (right), showing the range of values they can assume. Only the x-axis is considered, as the player controls the virtual character's movements only along this axis.
After implementing a system that provides perspective correction based on the user's head position, the next problem to be addressed is how to use the data coming from the motion cameras to make the invisible virtual character move according to the player's movements. This is mainly done by performing a mapping between the physical and virtual coordinates. As depicted in Figure 3.7, the motion tracking values range between −1.2 m and 1.2 m, with the haptic floor enclosed in the area between −0.9 m and 0.9 m; each tile has a side length of 0.3 m. The virtual coordinates, instead, vary between −2 and 2, with a sensitivity of 0.1. This sensitivity represents the amount of motion that the virtual character performs at a time: the character cannot move at will from one position to another; for example, it is not possible to move it from position 0.5 to position 1.0 in one single movement, but it takes five consecutive translations along the x-axis to reach the final destination. In other words, assuming the game is played with a keyboard, this is the movement we would get any time an arrow key is pressed. The virtual character can thus be placed in 40 different positions within the virtual world. It was decided to divide the virtual world into this specific number of positions as a compromise between having a fairly smooth movement and limiting the amount of delay introduced (delay due to the non-immediacy of the movement from one position to another one not directly accessible).
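In other words, each discrete character position corresponds to a virtual x-coordinate step of 0.1, as in this trivial sketch:

```csharp
// Convert a discrete character position (about -20..20) into the virtual
// x-coordinate (-2..2) used by the game world, with the 0.1 sensitivity
// described above.
static float PositionToVirtualX(int position)
{
    return position * 0.1f;
}
```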
Whenever a new packet is received from the motion cameras, the first operation performed is to divide the coordinate value along the x-axis by 1.2, so that it assumes a value between −1 and 1. The next step is to check whether this value is contained within the range −0.9/0.9 and, if so, multiply it by 0.045, taking its sign into consideration; this number represents the ratio between 0.9 and 20, and it is used to define which position the virtual character should be placed in (positions ranging between −20 and 20). In case the value is out of bounds, which can happen if the player is standing on the black frame surrounding the haptic floor, the virtual player is assumed to be either on the rightmost or leftmost side of the virtual world (i.e., either in position −20 or 20), depending on the sign of the coordinate value. The newly computed position is then compared with the current one and, if they differ, the virtual character is shifted either to the left or to the right depending on the sign of the difference; for example, if the current position is 12 while the new one is 14, we have 14 − 12 = 2, corresponding to two consecutive translations to the right.
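A minimal sketch of this mapping is given below, under the assumption that the floor extent of ±0.9 m maps linearly onto the integer positions between −20 and 20 (one position every 0.045 m) and that out-of-range values clamp to the outermost positions; the arithmetic described above is condensed here, so treat the rounding details as an approximation.

```csharp
using System;

static class CharacterMovementSketch
{
    const float FloorHalfExtent = 0.9f;   // metres
    const int   MaxPosition     = 20;     // virtual positions range from -20 to 20

    static int currentPosition = 0;

    // Called every time a new head position arrives from the motion cameras.
    public static void OnHeadPosition(float xMetres)
    {
        // Clamp to the floor edges (the player may be standing on the black frame).
        float clamped = Math.Max(-FloorHalfExtent, Math.Min(FloorHalfExtent, xMetres));
        int target = (int)Math.Round(clamped * MaxPosition / FloorHalfExtent);

        // Step towards the target one position at a time, mimicking consecutive
        // arrow-key presses rather than teleporting to the new position.
        while (currentPosition < target) { currentPosition++; MoveCharacterRight(); }
        while (currentPosition > target) { currentPosition--; MoveCharacterLeft(); }
    }

    // Placeholders for the translation actions provided by the Unity asset.
    static void MoveCharacterRight() { }
    static void MoveCharacterLeft()  { }
}
```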
Turn Left/Right
Related to this, another problem that needs to be solved is how to let the player turn either to the right or to the left when facing a crossroads. We considered many solutions, among which was the use of the speech recognition functionality offered by the Kinect, so that the player could simply say "Go Left" or "Go Right" to trigger the virtual character's turning ability; however, this solution proved to be unfeasible, mainly due to the large delay it introduced and the limited intuitiveness of the system.
We therefore decided to take advantage of the character movement system described above as follows: whenever the virtual character is in a position greater than or equal to 14 (i.e., whenever the player is physically located in the rightmost column of the haptic floor), its turning ability to the right is triggered. Similarly, the turning ability to the left is activated every time the virtual character is in a position lower than or equal to −14. All the logic used to check whether the character can really turn along a specific direction is provided directly by the original Unity asset. Here, it is enough to say that it is possible to turn along a certain direction only if the character is currently standing on a turn platform (i.e., a virtual platform provided with a crossroads) and, especially, if the intersection allows us to go along that specific direction (it is not possible to turn to the right if the only available road is to the left).
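Extending the earlier movement sketch, the turn trigger reduces to a simple threshold test on the current virtual position; TryTurnRight and TryTurnLeft stand in for the turn logic provided by the Unity asset.

```csharp
// Trigger the turning ability when the player stands in the outermost column
// of the haptic floor, i.e. when the virtual position reaches +/-14.
const int TurnThreshold = 14;

static void CheckTurn(int currentPosition)
{
    if (currentPosition >= TurnThreshold) TryTurnRight();
    else if (currentPosition <= -TurnThreshold) TryTurnLeft();
}

static void TryTurnRight() { }   // placeholder: the asset decides whether a turn is allowed
static void TryTurnLeft()  { }   // placeholder
```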
This solution may not feel natural at first, but in our opinion it was the best we could implement without creating any BIP from the virtual world. In any case, all those who had the chance to try the system were satisfied, saying that the approach felt very intuitive after getting used to it.
3.3.3 Jump Detection
Figure 3.8: Sequence of actions in a
standing vertical jump
Figure 3.9: Vertical jump as seen by the
motion cameras and the FSR sensors
When playing an infinite runner game, one of the most important abilities the player can rely on is the possibility of avoiding a collision with an obstacle by jumping over it. In our game this ability is necessary in order to avoid obstacles such as chairs and tables and, most importantly, to avoid ending up in the fire, which would lead to losing the game.
The detection system must comply with three constraints:
1. It must be as fast as possible, otherwise the player would not be able to promptly avoid obstacles due to latency.
2. It must not require too much computational power, i.e., it must not affect the frame rate of the game.
3. It must be very difficult to produce false positives, i.e., the player has to actually jump in order to trigger a jump of the virtual character.
Before describing how the jump detection system was implemented in our game, it is important to accurately understand and analyze the mechanics of the vertical jump. As described by Linthorne [36] and Boukhenous et al. [37], the jump movement can be seen as a composition of different sub-actions, which are summarized in Figure 3.8: first of all, the jumper, starting from an upright standing position, makes a preliminary downward movement by flexing at the knees and hips until reaching an angle of about 90°; after that, he immediately extends the knees and hips again to jump vertically up off the ground. Such a movement makes use of the "stretch-shorten cycle", in which the muscles are pre-stretched before shortening in the desired direction.
Finally, the drop phase is performed with the knees extended, on tiptoe, with subsequent cushioning to prevent any trauma.
Figure 3.9 shows a representation of a vertical jump from the point of view of the motion cameras (blue line) and the FSR sensors placed underneath the floor (red line). By analyzing this plot, it is possible to define several key times and phases of the movement:
• A: Initial stage of the jump. The user is standing in upright position and
stationary; the position received from the motion cameras matches the height
of the user (since the markers are placed on top of the helmet put on his head).
• A-B: The jumper relaxes his leg and hip muscles, allowing in this way the
knees and hips to flex under the effect of the force of gravity; as a result, the
user starts to move downward, causing a reduction in both the force applied
on the floor and the position of the head.
• B-C: Boost phase, during which the user increases the force applied on the
floor. However, the user continues to move downward, causing a reduction in
the head height. C is the point in which the acceleration is zero, i.e., muscular
strength = weight force.
• C-D: The resultant force applied on the floor is now positive, and as a result
the user starts accelerating upwards. Note that even though the accelera-
tion is positive, the user still continues to move downward. In D you have
the maximum acceleration caused by the expression of the maximum muscle
strength.
• D-E: This is the so called "pushoff phase", where the subject extends the
knees and hips and starts moving upwards; the force applied on the floor
starts declining rapidly, while the head height is increasing. In E you have
that the muscle strength is equal to the weight force.
22
3 – Infinite Runner
• E-F: Phase in which the force exerted by the muscles becomes lower than the
weight force, giving rise to a negative acceleration.
• F-G: Ascending phase of the jump, during which the subject is still moving
upwards, but he has started to slow down as a result of the force of gravity.
The user’s feet are not anymore in contact with the floor.
• G-H: Descending phase of the jump, where G is the peak of the jump and
H is the instant at which the user's feet come back into contact with the floor,
causing a peak in the force applied to it. The user flexes his hips and knees
in order to cushion the landing. Eventually, once the user is motionless on
the floor again, the force applied to it and the head height return to the values
registered at the initial instant A.
During the phase F-H the acceleration assumes a constant value equal to the
gravitational acceleration.
In order to verify that a jump always exhibits characteristics similar to those just
described, a small experiment was conducted in which a participant was asked to
stand stationary within the CAVE for a few moments and then to move around and
perform some jumps at random; throughout the test, the data coming from the
motion cameras and from the FSR sensors underneath the floor were recorded.
Figure 3.10 shows a graphical representation of the recorded data. The upper
plot reports, for each frame, the value captured by the motion cameras, while the
lower one shows, for each frame, the sum of all the values sensed by the FSR sensors.
In both plots the moments in which the participant performed a jump are highlighted.
By carefully analyzing these graphs, it is possible to notice that each jump always
presents the same pattern: first of all, there is a downward movement, which causes
an increase in the force applied on the floor; this is followed by an upward thrust
phase, during which the user loses contact with the floor for a few moments
(generating a rapid reduction in the FSR values); finally, there is the landing phase,
in which the user regains contact with the floor, generating a peak in the force
applied to it.
However, it is unthinkable to identify a jump only after it is completed, because
this would introduce a lot of latency in the game: even if the player physically
executed a jump in time to avoid an obstacle, the virtual player would perform it
much later, and this would probably lead to a collision. It is therefore necessary to
extract some features from the graphs that can be used to correctly identify a jump
in the shortest possible time.
First, it can be observed from both plots that it is very difficult to identify when
a jump starts, since the data for this phase is easily confused with the data obtained
when simply moving inside the CAVE. A similar observation can be made for the
final phase of the jump.
Figure 3.10: Captured data of a participant running and jumping around the haptic floor
Secondly, it is essential to point out a peculiarity of the FSR values: as long as
the user has at least one foot on the floor, the output data always has a fairly high
value. The moment the user stops exerting pressure on the floor (i.e., during the
ascending phase of the jump), the output suddenly drops and assumes a very small
value for several frames (because of the weight of each tile, the FSR sensors always
sense a force different from zero; furthermore, drift current increases the output
voltage); the value becomes large again once the user has landed and is back in
contact with the floor. This is the key feature on which the jump detection system
is based: once the game has started, the server calculates for each frame the sum
of all the FSR sensor values. If the sum is above a certain threshold, something is
exerting pressure on the floor, i.e., a player is assumed to be on it. When the sum
falls below this threshold, and remains so for a defined number of frames, it is
presumed that the player is currently in the ascending phase of a jump, and the
event is immediately notified to Unity by sending an OSC message. However, the
fact that the sum has fallen below the threshold could simply mean that the player
has stepped off the floor, and therefore no one is performing any jump. To avoid a
false positive detection, when Unity receives the message from the server it checks
the data coming from the motion cameras: if a steady increase along the y-axis has
been registered over the last few frames (i.e., the head position is moving upward),
then it is possible to state that the player is actually jumping. As a result, a jump
action will be triggered for the virtual player and, hopefully, it will be able to avoid
the obstacle. Figure 3.11 depicts a simple flowchart of the operations just described.
Figure 3.11: Flowchart showing the main operations executed by the server (left
side) and Unity (right side) to correctly detect a jump
In order to make the system as efficient as possible, we need to carefully define
the threshold value: a value too low would not allow us to correctly identify many
jumps, while a value too high would cause too many false positives. Moreover, it is
also necessary to define the number of frames that must be analyzed to be able to
correctly state that the player is no longer applying force on the floor. After several
tests performed with the SRL members, it was decided to use the empirically chosen
values of 25,000 for the threshold and 5 for the number of frames.
It is worth pointing out that the instant at which the player starts a jump and the
one at which it is correctly identified do not coincide; this introduces a delay in the
game, albeit a small one.
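To make the detection logic concrete, the following is a minimal sketch of the server-side loop described above, written in C++. It is an illustration only: the frame rate, function names and the OSC call are placeholders, since the actual server code is not reproduced in this thesis; only the two constants (25,000 and 5 frames) come from the text.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Stand-in for the OSC message that notifies Unity of a jump candidate;
// the real server uses an OSC library and an address pattern not shown here.
void sendOscJumpCandidate() {
    std::cout << "jump candidate -> notify Unity via OSC\n";
}

// Empirically chosen constants reported in the text.
const double kPressureThreshold = 25000.0;  // threshold on the sum of FSR values
const int    kFramesBelow       = 5;        // consecutive frames below threshold

int framesBelowThreshold = 0;

// Called once per frame with the latest reading of every FSR sensor.
void processFsrFrame(const std::vector<double>& fsrValues) {
    double sum = std::accumulate(fsrValues.begin(), fsrValues.end(), 0.0);

    if (sum >= kPressureThreshold) {
        // Someone is pressing on the floor: reset the counter.
        framesBelowThreshold = 0;
        return;
    }

    // Sum below threshold: the player is either airborne or has stepped off.
    if (++framesBelowThreshold == kFramesBelow) {
        // Notify Unity; Unity then checks the motion-capture data and only
        // triggers the virtual jump if the head height has been rising over
        // the last few frames (ruling out the "stepped off" case).
        sendOscJumpCandidate();
    }
}
```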
3.3.4 Slide down detection
Figure 3.12: Sequence of actions in a squat movement
Figure 3.13: Squat movement as seen by the motion cameras and the FSRs
Thanks to the jump detection feature described in the previous section, the
player is now able to avoid most of the obstacles that can appear within the game
world. However, anyone who has played an infinite running game at least once will
know that some obstacles can be avoided only by passing under them rather than
over them. Our game also needs this ability, so that obstacles such as chandeliers
or spike traps can be successfully avoided.
When playing on a computer, the ability to slide under an obstacle is generally
triggered by pressing the down arrow key. Since in our game the user himself is
the controller, he is the one expected to make a downward movement, so that this
can be mapped to the virtual character. As depicted in Figure 3.12, this movement
consists of three main phases: the player, starting from a standing position, performs
a downward movement by bending his knees, and subsequently executes an upward
movement so as to return to the starting position. This sequence of movements is
very similar to the one followed during a jump; in this case, however, during the
upward movement the player does not exert a force strong enough to overcome the
force of gravity and, as a consequence, his feet remain in contact with the floor the
whole time. This type of movement is basically the one executed during the squat
exercise (https://en.wikipedia.org/wiki/Squat_(exercise)).
As already done for the jump detection system, in order to understand how to
correctly identify when the player is trying to trigger the slide down ability of the
virtual character by making a downward movement, a small experiment was run in
which a participant was asked to perform, within the CAVE, the sequence of
movements described above. Throughout the test, the data coming from the motion
cameras and the FSR sensors were recorded so that they could then be represented
graphically.
Figure 3.13 shows a representation of these data. By carefully analyzing this
plot, it is possible to define several key points of the movement:
• A: At the beginning the participant is standing stationary in an upright posi-
tion. As a result, the force applied on the floor is constant, while the position
received from the motion cameras matches the height of the user.
• A-B: The participant starts relaxing his leg and hip muscles, allowing in this
way his knees and hips to bend under the effect of the force of gravity. This
causes a drop in the force applied on the floor, while the head position remains
unchanged for a few moments.
• B-C: Free fall phase, during which the participant starts executing a downward
movement, letting gravity alone act on him. This causes a reduction of both
the force applied on the floor and the head position value.
• C-D: The participant exerts a force on the floor so as to slow down the fall.
However, he keeps moving downward, causing a reduction in the head height,
until he eventually assumes a crouched position.
• D-E: Throughout this phase the participant remains stationary in a crouched
position. The head position is constant, while the force sensed by the FSR
sensors increases until it assumes a value similar to the initial one.
• E-F: Boost phase, during which the participant starts an upward movement
by exerting a force on the floor so as to return to a standing position; the peak
of this force is reached at G.
Eventually, both the force applied on the floor and the head position return
to the values recorded at instant A.
After defining the details of the dynamics of the movement, it was decided to run
a further experiment along the lines of that carried out for the jump detection. In
particular, a participant was asked to execute some jumps and squats while moving
within the CAVE; this was done in order to be sure that the two movements can be
easily distinguished from one another and, more importantly, to identify some
features that allow us to correctly detect when the participant is in a crouched
position (so as to trigger the slide down ability of the virtual character).
The data recorded during the experiment can be observed in Figure 3.14. The
upper plot reports, for each frame, the value captured by the motion cameras,
while the lower one shows, for each frame, the sum of the values sensed by all FSR
sensors. In both plots, the moments in which the participant performed either a
jump (light blue) or a squat (light gray) are highlighted.
Figure 3.14: Captured data of a participant executing different movements on the haptic floor
The first thing to notice is that the two movements have completely different
characteristics. Considering the value received from the motion cameras, during the
ascending phase of a jump this value is well above the normal one (i.e., the
participant's height), whereas during the execution of a squat it assumes a value
well below the normal one.
Considering instead the FSR sensor data, the execution of a jump leads to a
sudden reduction of the sum of the sensed values; as explained in the previous
section, this is the main characteristic on which the whole jump recognition system
is based. For the squat movement, on the other hand, the haptic floor does not
provide any significant information. Analyzing the plot, it is possible to observe
that the data contained in the areas highlighted for the squats follow a pattern
similar to the one that occurs when the participant is simply walking within the
CAVE (non-highlighted areas); these areas, in fact, do not contain any values much
greater than the normal one (i.e., the force sensed while the participant is stationary
on the floor), nor much smaller. This means that the data coming from the floor
cannot be used to effectively detect the desired movement.
For this reason, it was decided to base the slide down recognition system only
on the motion tracking data. Looking closely at the plot in question, it is possible
to observe that the execution of a squat results in a significant reduction of the
received value; as long as the participant remains in a squatting position, his height
turns out to be much smaller than the normal one, or the one associated with a
jump. Starting from this observation and exploiting the idea already introduced
in the jump detection system, it was decided to define a certain threshold also for
the identification of the squat movement: once the game has started, a script
defined within Unity begins to monitor the participant's height. As long as it stays
above the threshold value, it is assumed that the player is simply walking within
the CAVE or executing a jump; however, as soon as the input value goes below the
threshold and remains there for several frames, it is assumed that the player is
performing a squat. This triggers the slide down ability of the virtual character.
However, it should be obvious that the effectiveness of such a system depends
entirely on the chosen threshold value; a value very close to the normal one would
lead to the identification of many false positives, while a value too small would not
allow the correct identification of many squats. In addition, it is also essential to
define what is meant by "normal height", because not all people are the same height:
a threshold value that may be fine for an adult will never work for a child. In order
to solve these problems, it was decided to proceed as follows:
• Before executing the game, the player, wearing the helmet with the markers
on it, is asked to stand stationary in an upright position within the CAVE.
• Just after the start of the game, the procedure in charge of monitoring the
motion tracking data saves the value received in the first frame. This is
considered to be the height of the player, i.e., the "normal height".
• Finally, the threshold value is computed starting from this value:
squat_threshold = normal_height × 0.77 (3.1)
In other words, the threshold value is equal to 77% of the normal height (the
factor 0.77 was chosen empirically).
Taking as an example the data shown in the plot, the height of the participant
is approximately 1.65 m; as a consequence, the threshold value associated with him
will be 1.65 m × 0.77 ≈ 1.27 m.
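For illustration only, the calibration and per-frame check described above can be sketched as follows, written in C++ even though the real implementation is a Unity script. The number of consecutive frames is an assumption (the text only says "several frames"); the 0.77 factor comes from Equation (3.1).

```cpp
// Illustrative sketch of the squat detection logic (the actual implementation
// is a Unity script). Only the 0.77 factor is taken from Equation (3.1);
// the frame count below is a placeholder value.
const double kSquatFactor     = 0.77;
const int    kFramesBelowHead = 10;   // "several frames" -- assumed value

double squatThreshold  = 0.0;
int    framesBelowHead = 0;
bool   calibrated      = false;

// Called once per frame with the head height reported by the motion cameras.
// Returns true when the slide down ability should be triggered.
bool updateSquatDetection(double headHeight) {
    if (!calibrated) {
        // First frame: the player is standing upright, so this reading is
        // taken as the "normal height".
        squatThreshold = headHeight * kSquatFactor;
        calibrated = true;
        return false;
    }

    if (headHeight >= squatThreshold) {
        framesBelowHead = 0;   // walking or jumping: nothing to do
        return false;
    }

    // Head is well below the normal height: count how long it stays there.
    return ++framesBelowHead == kFramesBelowHead;
}
```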
The system was extensively tested by the SRL members. In general all were satisfied,
calling it very intuitive and easy to understand; the adaptive threshold guaranteed
excellent results for all participants, even when the executed movements were not
very pronounced. The system proved able to correctly distinguish whether a
participant was executing a jump, a squat, or simply moving within the CAVE.
The system was even tested by the young son (around 8 years old) of one of the
laboratory members. The child understood very quickly how to play the game and
had a lot of fun during the experience. In his case too, the system worked as
expected, distinguishing all his movements efficiently.
3.3.5 Haptic feedback
With the introduction of the slide down detection system, it is now possible to play
Infinite Runner within the CAVE environment. However, in order for the player
to have the most immersive experience possible, we still want to introduce haptic
feedback that can enhance the gameplay of our VR application. After performing
an in-depth analysis of the game's features, it was decided to include a haptic effect
whenever a specific game event is encountered. In particular, the game events
that have been taken into consideration are the collection of a coin and the collision
with an obstacle, for which we want to take advantage of the functionality
implemented within the NIW server, i.e., the one that allows us to trigger a neutral
sound at will from either some specific tiles or from all 36 tiles at once. To exploit
this function, we have to execute the following operations:
• First of all, as soon as the game is started, we need to send a series of OSC
messages to the NIW server, one for each game event that we want to
consider; in our case, then, we need to send two messages. Each of these
messages, addressed to a specific address pattern, contains two arguments:
a string, representing the type of game event, and a number, specifying the
intensity of the feedback that we want to associate with that event (the
greater the number, the stronger the feedback). In the specific case of
Infinite Runner, the two messages contain the pairs ("Coin", 2) and
("Obstacle", 5), respectively; in any case, it is possible to define as many game
events as desired.
The server, by analyzing the address pattern of each received packet, is able
to correctly interpret them and add the data contained therein to a suitable
data structure.
• After this preliminary operation, it is possible to start playing. Whenever the
player collects a coin or collides with an obstacle, an OSC message will be sent
whose parameters and address pattern depend on the type of the feedback
that we want to be generated:
– If we want to trigger the haptic effect from just a specific tile, the message
will contain three parameters, i.e., the string “Coin” or “Obstacle” (de-
pending on the event that occurred), and two integer numbers, indicating
the x and y coordinates of the tile.
– Instead, if we want the haptic feedback to be generated by all the tiles,
the message will only contain the string “Coin” or “Obstacle”.
– The last available option, which is the one used by Infinite Runner, allows
us to generate a haptic effect from the tiles on which the player is currently
located (this information is maintained by the server itself). In this case
too, the only argument contained in the packet is the string "Coin" or
"Obstacle", as in the previous case, but the address pattern is different.
Since packets have different address patterns depending on the service
requested, the NIW server is able to distinguish and interpret them
appropriately. In any case, regardless of the type of packet, the first
operation performed by the server is to check whether the string contained
as the first parameter is present in the previously created data structure
and, if so, retrieve the volume level associated with that specific game event.
This number is then sent via OSC to the Max/MSP patches running on the
Mac minis in charge of managing the rows containing the tiles from which
we want the haptic effect to be generated (a minimal sketch of this message
exchange is given below).
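The sketch below illustrates the two phases of this exchange from the game's side, in C++. The address patterns and the helper function are purely hypothetical, since the actual patterns used by the NIW server are not reported here; only the event strings and intensities ("Coin", 2 and "Obstacle", 5) come from the text.

```cpp
#include <iostream>
#include <string>

// Stand-in for the OSC library call actually used by the game; here the
// "message" is simply printed. The address patterns below are illustrative
// placeholders, not the real ones used by the NIW server.
void sendOsc(const std::string& address, const std::string& event,
             int a = -1, int b = -1) {
    std::cout << address << " " << event;
    if (a >= 0) std::cout << " " << a;
    if (b >= 0) std::cout << " " << b;
    std::cout << "\n";
}

// Registration phase: one message per game event, carrying its intensity.
void registerGameEvents() {
    sendOsc("/niw/game/register", "Coin", 2);
    sendOsc("/niw/game/register", "Obstacle", 5);
}

// Trigger phase: the three alternative destinations described above.
void onCoinCollected(int tileX, int tileY) {
    sendOsc("/niw/game/tile", "Coin", tileX, tileY);  // one specific tile
    sendOsc("/niw/game/all", "Coin");                 // all 36 tiles at once
    sendOsc("/niw/game/underfoot", "Coin");           // tiles under the player
}                                                     // (used by Infinite Runner)

int main() {
    registerGameEvents();
    onCoinCollected(2, 3);
}
```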
In addition to the haptic effects generated as a result of game events, we also
tried to exploit the capabilities offered by the system in providing different haptic
textures (see Chapter 2 for the details). Since Infinite Runner takes place in two
different virtual worlds, i.e., inside and outside of a castle, it was decided to associate
a different haptic texture to the floor depending on whether the player is on the
indoor or outdoor platform; these correspond to none (i.e., no haptic texture) and
the ice texture, respectively. The haptic texture associated with each
tile is dynamically determined based on the virtual world by a raycast method. Here,
the scene graph defined in Section 3.3.1 is used. In the parent node, which represents
the CAVE, 36 child objects are instantiated at the position of each haptic tile. The
position of each child object is lifted by a constant height h, and a ray is cast
downwards. The first object hit by the ray, which is most likely a virtual ground
plane, determines the haptic texture of the tile. If the hit object is the outside
platform, the haptic texture associated with the tile is switched to ice; otherwise,
it is set to none.
The status associated with every tile is stored in a 6×6 matrix, in which each
element contains the word "ice" or "none". If the values contained within this data
structure differ from the previous ones, it means that at least one haptic texture has
changed. As a consequence, all 36 values are inserted in an OSC message and sent
to the NIW server, which is responsible for notifying the Max/MSP patches, via
OSC messages, about which haptic texture should be associated with each tile. It
was decided to always send all 36 values in order to minimize the number of packets
sent: the NIW server always receives a single packet, regardless of how many tiles
have a new associated haptic texture.
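A minimal sketch of this per-tile update, written in C++ for illustration: in the real game the raycast is performed with Unity's physics engine, so the two helper functions below are dummy placeholders, and the OSC packet format is not reproduced here.

```cpp
#include <array>
#include <string>

// Dummy stand-in for the Unity raycast described above: a ray is cast
// downwards from a point lifted above tile (x, y), and the first object hit
// tells us whether the tile lies over the indoor or outdoor platform.
std::string groundUnderTile(int x, int y) {
    (void)x; (void)y;
    return "inside";  // placeholder result
}

// Placeholder for the OSC message carrying all 36 texture names to the
// NIW server (the actual address pattern is not reproduced here).
void sendTextureMatrix(const std::array<std::string, 36>& textures) {
    (void)textures;
}

std::array<std::string, 36> previousTextures;  // 6x6 status matrix, row-major

// Called periodically from the game loop.
void updateHapticTextures() {
    std::array<std::string, 36> textures;
    for (int y = 0; y < 6; ++y)
        for (int x = 0; x < 6; ++x)
            textures[y * 6 + x] =
                (groundUnderTile(x, y) == "outside") ? "ice" : "none";

    // A single packet with all 36 values is sent, but only when at least one
    // tile has changed since the previous update.
    if (textures != previousTextures) {
        sendTextureMatrix(textures);
        previousTextures = textures;
    }
}
```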
3.4 Experiments and Results
The experiment is intended to explore the role that haptic feedback can play in
enhancing a player’s experience and performance in a video game, and what elements
of the game may benefit the most from the addition of such feedback. This was done
by having participants play the video game implemented in this chapter. As already
described in the previous sections, the objective of the game is to collect virtual coins
approaching the character while avoiding obstacles. The game was played both with
and without the addition of haptic feedback delivered to the participants’ feet via
the floor. This feedback was provided both in response to user movement around
the floor, so as to generate the feeling of a virtual ground texture, and in response
to collisions with objects in the virtual world.
In order not to make the game too challenging, for this first phase of
the experiment it was decided to employ a simplified version of the game in which
participants only had to move to the right or to the left so as to collect coins or
avoid obstacles.
3.4.1 Methodology
Measures
Within the experiment, both quantitative and qualitative data were collected so as
to determine any change in the user experience due to the introduction of haptic
feedback through the vibrotactile floor. In particular, participants’ performances
were examined by collecting in-game metrics, such as number of collected coins and
avoided obstacles, and, moreover, participants were evaluated using physiological
and psychological information. It is in fact possible to detect participants’ emo-
tional states at a certain time by observing their biological data such as the skin
conductance, the heart rate, etc. [38]. Participants' actions were also videotaped to
observe their reactions to different game events. Regarding psychological measurements,
participants were requested to complete three different questionnaires. All the de-
signed questionnaires can be found in Appendix A.
Procedure
The procedure was as follows:
1. Upon arrival, participants were given the consent form to read.
2. After agreeing to participate in the experiment, participants were shown a
small video presentation (available at https://vimeo.com/152045111) on how
to stand on the floor and how to play the video game.
3. Participants were asked to complete a pre-test questionnaire so as to under-
stand their background.
4. Participants were asked to wear small biosensors on their fingers and a band
around the abdomen to collect physiological data during the experiment,
consisting of body temperature, heart rate, skin conductance and respiration
rate. The biosignal sensors are medical-grade devices manufactured by
Thought Technology (detailed information on the system used can be found at
http://thoughttechnology.com/index.php/complete-systems/biofeedback-advanced-system.html)
and were wiped clean with a disinfectant between uses.
5. Participants were asked to wear a small headset on their heads for motion
tracking purposes.
6. Participants were asked to stand on the floor so that the game could begin.
7. Participants were asked to play four sessions, each lasting 2 minutes. A
repeated measures design was employed with two conditions, Haptic_Audio
and NoHaptic_Audio. The order in which the conditions were presented to
each participant was randomly chosen so as to minimize any learning effect.
8. Between each session, participants were asked to rest for one minute and com-
plete a post-session questionnaire.
9. After playing all four game sessions, participants were asked to complete the
post-test questionnaire.
Subjects
Eight male subjects between the ages of 19 and 28 took part in the experiment. All
participants reported having previously played an endless running game; three
participants (No. 1, 3 and 7) stated that they play video games for 0-5 hours/week,
three others (No. 4, 5 and 8) for 5-10 hours/week and the last two (No. 2 and 6) for
10-15 hours/week. Six of them said they use the computer as their preferred video
game platform, while the other two (No. 4 and 8) said they prefer home video game
consoles such as PlayStation 3 (PS3), PS4 and Nintendo Wii. Since the experiment
was fairly brief and involved play of a simple yet fun game, no monetary compensation
was given to any participant.
Although it was not originally planned, participants were divided into two different
groups: one group consisting of participants No. 1, 2 and 3, and the second one
consisting of the remaining five participants. This decision was taken in response to
the comments made by the first three participants, who complained about the fact
that in order to collect coins and avoid obstacles it was faster and easier to just move
their heads instead of walking around the CAVE; in addition, they also observed
that the vibrations generated from the floor were too subtle. For these reasons, for
all other participants the following changes were made:
• Audio was disabled during the haptic sessions, so that the conditions for the
repeated measures design became Haptic_NoAudio and NoHaptic_Audio.
• The intensity of the haptic feedback generated when the user hits an obstacle
was increased.
• The player's body movements were tracked, not just those of his head. This
was achieved by asking users to place a series of markers on the band around
the abdomen used to collect their respiration rate. As a result, the markers on
the head are used only for perspective correction purposes.
Figure 3.15: Average collected coins and hit obstacles rates for each participant,
divided by haptic and audio sessions
3.4.2 Results
The average rate of collected coins and hit obstacles, divided by haptic and audio
sessions is summarized in Figure 3.15. Among all the sessions with haptic feedback,
the highest achieved rate was 94.09%, while the lowest was 70.31%. As for the
sessions with audio, the highest rate was 95.52% and the lowest was 76.20%. The
average over all participants and all haptic sessions was 82.29%, with a standard
deviation of 7.35%, while for the sessions with audio the rate was 85.88%, with a
standard deviation of 6.57%. Regarding the hit obstacles, the highest registered rate
35
3 – Infinite Runner
Figure 3.16: Results of the post-session questionnaire for Group#1 (top) and
Group#2 (down)
for haptic sessions was 14.21%, while the lowest was 0%. Considering instead the
audio sessions, the highest rate was 15.26% and the lowest was 2.26%. The overall
averages among all participants were equal to 7.25% (haptic sessions) and 6.91%
(audio sessions), with a standard deviation of 4.34% and 4.32%, respectively.
Figure 3.16 shows the results of the post-session questionnaire for both group #1
and group #2. Participants in group #1 reported having performed better than the
ones in group #2, with a greater preference for sessions played with haptic feedback.
Group #1 found the game to be less challenging than group #2 did. In any case,
as we just saw from the in-game data, both groups obtained similar results while
playing either with or without haptic feedback; in other words, performance was not
affected by the modality with which feedback was provided. Regarding the third
question, it is interesting to note that group #1 showed a slight preference for the
Haptic_Audio sessions over the ones played with audio only, while participants in
group #2 much preferred the sessions with audio only over the ones with just haptic
feedback. From this result we can assume that haptics is a nice addition for
enhancing the overall experience, but audio is much more important.
Figure 3.17: Results of the post-test questionnaire
Finally, the results related to the post-test questionnaire are depicted in
Figure 3.17. First, it is interesting to note that, contrary to what emerged from the
analysis of the previous questionnaire, group #2 appreciated the addition of haptic
effects to the gameplay more than group #1. Moreover, participants of group #2
also stated that the haptic feedback helped them in collecting coins and avoiding
obstacles to a greater extent than that perceived by participants of group #1
(although the answers to these two questions were not very satisfactory for either
group). This difference can be in part attributed to the fact that group #2 received
feedback of greater intensity whenever an obstacle was hit, a change that was
apparently well appreciated.
One of the most interesting details that emerged from the questionnaire is that
group #1 was the one that favored the tracking system more, even though the
system was modified for participants in group #2 precisely in response to the
complaints received. We are not sure of the reasons behind this result, as we would
have expected group #2 to be the one to prefer the system; the only certain thing
is that not all users had the same conception of what it meant to be able to move
freely within a virtual world. For example, some participants tried to avoid obstacles
by jumping over them, others tried to collect the coins using their arms, and others
were satisfied to just move left and right as instructed. We should also admit that
eight participants are not at all sufficient to draw any comprehensive conclusion;
however, it would be interesting to investigate the matter in depth in the next phase
of the experiment.
Seven participants stated that the overall experience was not stressful at all;
only participant #4 asserted that it was very stressful, and this is why the standard
deviation associated with this question for group #2 is quite large. Since all the
other users answered in the same manner, one plausible explanation of this
divergence is that he misunderstood the meaning of the question.
All participants felt immersed in the gameplay, mainly thanks to the potential
offered by the CAVE environment. They all said that they would be willing to play
again if invited; participants of group #2 were the ones who gave the most positive
answer to this question.
As regards the physiological data, their analysis did not lead to interesting
conclusions. This is mainly due to the fact that such data were found to be subject
to considerable noise caused by the physical movements performed by the
participants. With regard to the video recordings, it was noted that participants
had different interpretations of what it meant to be free to move within the CAVE
environment: one merely moved to the left or to the right with his arms stretched
along his body, one participant tried to jump to avoid obstacles although he was
told it was not possible to do so, and another one even tried to collect coins with
his hands.
Chapter 4
MINIW
The vibrotactile floor, combined with an immersive environment such as the CAVE,
enables the development of multimodal video games in a whole new way. As
demonstrated in the previous chapter, it is possible to use the haptic floor not only
to give the user haptic feedback as a consequence of an event that happened in the
game, but also as an interface that allows the user to interact with the virtual
world.
However, the environment used to develop such an experience has the major
drawback of being really expensive and occupying a lot of space. As a consequence,
it is practically impossible for a normal user to use this technology directly at home.
Motivated by the idea of providing a haptic experience using tools more accessible
than a CAVE, we decided to exploit the knowledge acquired and to apply it to a
2×2 tile floor platform named MINIW.
The objective of our work was to develop something to introduce the general
public to the haptic floor technology. Until then, people who were interested in
trying the capabilities offered by the system had to be invited directly to our
laboratory to see the projects developed using the haptic floor contained in the
CAVE-like environment, as no one had developed anything using MINIW. We
demonstrated our work during two big events: the first one was TEDxMontreal
(http://tedxmontreal.com/en/) on November 7th in Montreal, and the second one
was Maker Faire Ottawa (http://makerfaireottawa.com/) on November 8th in
Ottawa.
Two experiences were created using MINIW. Their development required the
solution of some intrinsic problems associated with this platform:
1. MINIW has limited dimensions, making it too dangerous for a user to walk on
it.
2. Tiles are made of plexiglass, and for this reason it is not possible to project
anything on them. Even if they were opaque, a projector mount would be
needed, which is cumbersome for exhibitions.
Due to these limitations, it is not correct to think of MINIW just as a small
version of the haptic floor housed within the CAVE. In the latter, the user can move
freely on it and is fully aware of his position. Moreover, the haptic feedback
generated by each tile can be changed according to the environment that is currently
projected on it.
In order to show the features offered by MINIW, two projects were developed:
• "Magic Tiles": it allows us to demonstrate different haptic textures without
the need to project anything on the floor. In particular, haptic textures have
been associated with the colors of physical tiles. There are four foam tiles
with different colors, and each tile has aluminum tape on the back forming
a pattern. When a foam tile is aligned on a haptic tile, this tape shorts 2 of
the 4 electrodes placed on each plexiglass tile. A microcontroller keeps
monitoring all the electrodes to identify any changes, so that it is possible to
select the desired haptic texture for each tile.
• "Wakaduck": it is a video game inspired by duck shooting games, but with a
unique control scheme. A virtual can with a spring attached is placed on
MINIW; the user steps on the can, aims at a duck by controlling the pressure
and direction, and releases to shoot the can.
A detailed description of these two projects can be found in the next sections.
4.1 System Description
As depicted in Figure 4.1, the system is composed of three macro components:
MINIW and two computers (Mac Mini and PC).
MINIW is the first implemented prototype of the haptic floor. It consists of
three main elements:
1. Arduino (FSRs): unlike the NIW, which uses one Gluion per row to read FSR
data, a single microcontroller is responsible for receiving the data coming from
all 16 FSR sensors placed under the tiles. All the data is then sent to the
server hosted on the Mac Mini over a USB serial connection. This operation is
performed 30 times per second (a minimal sketch of this read loop is shown
after this list).
2. Arduino (Electrodes): this microcontroller detects connections between the
electrodes located on top of each tile and sends them to the server over another
USB serial connection. Details can be found in Section 4.2.1.
Figure 4.1: System architecture
3. Actuators: an actuator is attached to each tile. They are used to generate the
haptic feedback. It is the server that decides which haptic texture should be
rendered according to the information provided by the two microcontrollers
described just above.
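The following is a minimal Arduino-style sketch of the FSR read loop mentioned in item 1. The pin mapping, packet format and baud rate are assumptions, since the text only states that one microcontroller reads all 16 FSRs and sends them over USB serial 30 times per second; a board with at least 16 analog inputs (e.g., an Arduino Mega) or an external multiplexer is assumed.

```cpp
// Illustrative sketch of the FSR microcontroller loop; pin mapping, packet
// format and baud rate are assumptions not taken from the thesis text.
const int NUM_FSRS = 16;
const unsigned long FRAME_PERIOD_MS = 1000 / 30;  // ~30 frames per second

void setup() {
  Serial.begin(115200);  // assumed baud rate
}

void loop() {
  unsigned long start = millis();

  // Read every FSR and send the frame as one comma-separated line.
  for (int i = 0; i < NUM_FSRS; i++) {
    Serial.print(analogRead(A0 + i));
    Serial.print(i < NUM_FSRS - 1 ? ',' : '\n');
  }

  // Pace the loop so that a frame is emitted roughly 30 times per second.
  unsigned long elapsed = millis() - start;
  if (elapsed < FRAME_PERIOD_MS) delay(FRAME_PERIOD_MS - elapsed);
}
```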
The data coming from the two microcontrollers is received and processed by
server code written in C++ and hosted on a Mac Mini computer.
The server code consists of three different threads, each with a different task:
the first two are in charge of receiving data from the two microcontrollers, while the
third one processes it. This last thread is the one that defines the texture feedback
that each tile should have, based on the information received from the Arduino
connected to the electrodes, and the one that contains the logic on how to interpret
the FSR values so that MINIW can be used as an interface to play Wakaduck.
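A minimal sketch of this three-thread layout, with the serial reading and the processing reduced to placeholders; the structure names are illustrative and do not come from the actual server code.

```cpp
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative sketch of the three-thread server structure described above.
struct SharedState {
    std::mutex mutex;
    std::vector<int> fsrValues;       // latest frame from the FSR Arduino
    std::vector<int> electrodeState;  // latest reading from the electrode Arduino
};

// Thread 1: read the FSR Arduino over USB serial and update the shared state.
void readFsrSerial(SharedState& state) { (void)state; /* serial read loop */ }

// Thread 2: read the electrode Arduino over the second serial connection.
void readElectrodeSerial(SharedState& state) { (void)state; /* serial read loop */ }

// Thread 3: decide the haptic texture of each tile from the electrode data and
// interpret the FSR values so that MINIW can act as the Wakaduck controller;
// the results are then sent out as OSC messages to Max/MSP and Unity.
void processLoop(SharedState& state) {
    std::lock_guard<std::mutex> lock(state.mutex);
    // ... texture selection, FSR interpretation, OSC output ...
}

int main() {
    SharedState state;
    std::thread t1(readFsrSerial, std::ref(state));
    std::thread t2(readElectrodeSerial, std::ref(state));
    std::thread t3(processLoop, std::ref(state));
    t1.join(); t2.join(); t3.join();
}
```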
Once the textures are defined for all the tiles, the server sends an OSC message
to the Max/MSP patch, which synthesizes the desired feedback whenever someone
steps on a tile. OSC messages are also sent to a client computer running Unity in
order to notify it that something is happening on the floor. Based on that
information, the game status is updated accordingly. A full description of how the
server communicates with the client is given in Section 4.3.3.
4.2 Magic Tiles
To augment a floor or to synthesize a virtual floor, a display is preferred in order
to give the user both visual and haptic feedback. For example, projection, an LCD
screen, or a head-mounted display can be used, and both visual and haptic textures
of the floor can be dynamically changed with time. However, all these solutions
present some issues:
• Projection can be occluded by the participant's body standing on the floor.
• It is difficult to design an LCD display that can be stepped on, not to mention
that such displays are really expensive.
• As explained in the introduction of this Chapter, it is very dangerous to use
head-mounted displays such as the Oculus Rift since the floor is really small
and the user may fall down.
In order to avoid these problems, our goal was to introduce another layer on top
of the plexiglass tiles that could somehow represent the haptic texture.
The first idea was to use a smartphone in order to create an augmented reality
experience. The user, standing on MINIW, would have been able to see the virtual
texture associated with a tile simply by pointing the phone camera at it. So, for
example, if a tile was assigned the ice texture, the user would have seen an ice
texture on that tile; by stepping on it, the user would have felt the ice haptic
feedback and seen the ice texture cracking on the phone. Moreover, the user would
have been able to dynamically change the visual feedback (and consequently the
haptic one) using the smartphone.
However, after running some preliminary tests we noticed that there was not a
strong connection between visual and haptic feedback: the user was mainly focused
on looking at the phone rather than on feeling the haptic feedback.
Another idea was to create drawings on sheets of paper to put on top of the
tiles. Each sheet would have featured a drawn element allowing the user to associate
that drawing with a particular texture. Following the example above, for the ice
texture there would have been a sheet with an ice cube drawn on it. The user would
have been able to place these sheets at will on the tiles, so as to create his own
personal haptic floor. Using a Kinect and some image analysis techniques, it would
have been possible to identify the item drawn on a particular sheet in order to select
the appropriate tactile feedback for the tile underneath it.
This solution, however, has the big problem that the user would not have been
able to step on the drawings placed on top of the tiles, as they would have been
ruined. At most, the user would have been able to feel the haptic feedback by
pressing on a tile with his hands. This, however, would not have made much sense,
since the objective was to create a haptic floor, not a haptic surface.
This gave rise to the idea of replacing the paper sheets with something the user
could step on, while keeping a distinctive element that makes it possible to identify
the object placed on top of each tile. We then thought of using the classic colored
foam tiles (Figure 4.2) with which toddlers like to play.
Figure 4.2: MINIW. a) Foam tiles; b) MINIW with foam tiles
With this approach, the differentiating element between the foam tiles is the color
of the tiles themselves, and not something that is drawn on top of them. Moreover,
the user is able to place these foam tiles on top of the plexiglass ones, with the
freedom to step on them. Kinect is still used in order to monitor the haptic floor
and adapt the haptic feedback for each tile based on the foam tile placed by the
user on top of it.
By swapping interlocking foam tiles of different colors, the haptic texture changes
accordingly. The user can choose among red, blue, light blue and yellow tiles, which
represent the crushing-can, water, ice and sand textures, respectively. By actively
swapping the tiles, users are expected to recognize the change in haptic feedback.
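For illustration, the color-to-texture association just described can be expressed as a simple lookup; the texture identifier strings below are placeholders for whatever names the server actually uses.

```cpp
#include <map>
#include <string>

// Foam-tile colour to haptic-texture mapping as described above; the texture
// identifiers are placeholders, not the names used by the real server.
const std::map<std::string, std::string> kColorToTexture = {
    {"red",        "crushing_can"},
    {"blue",       "water"},
    {"light_blue", "ice"},
    {"yellow",     "sand"},
};

// Given the colour detected on top of a haptic tile, return the texture name;
// an unrecognized colour falls back to "none" (no haptic texture).
std::string textureForColor(const std::string& color) {
    auto it = kColorToTexture.find(color);
    return it != kColorToTexture.end() ? it->second : "none";
}
```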
This solution, however, presents a problem: where to place the Kinect in
order to correctly detect the colors of the different foam tiles placed on the haptic
floor. Moreover, it is important to take into account that with the user standing
on top of the floor, it is hard to recognize the foam tiles' colors due to occlusion.
The first idea to solve this problem was to place the Kinect underneath the
plexiglass tiles. This solution, however, was unfeasible since there is no room inside
MINIW to place a Kinect: most of the space is occupied by the actuators and all
the wires.
We then decided to substitute the Kinect with a normal webcam, which could have
been easily fitted between the actuators. But this solution also proved unachievable:
inside MINIW it is really dark, and illumination plays an important role when
applying image analysis techniques. Moreover, the field of view of most webcams is
not wide enough to monitor all four tiles at once. It would therefore have been
necessary to add some sort of illumination and use many
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 

Enhancing video game experience through a vibrotactile floor

  • 1. POLITECNICO DI TORINO Collegio di Ingegneria Informatica, del Cinema e Meccatronica Master of Science in Computer Engineering Master Degree Thesis Enhancing video game experience through a vibrotactile floor Supervisor Prof. Andrea Giuseppe Bottino Candidate Nicola Gallo Student no. 206830 External Supervisor McGill University - Shared Reality Lab Prof. Jeremy R. Cooperstock March 2016
  • 3. Abstract When it comes to Virtual Reality, the whole idea is to initiate the feeling of Pres- ence, the perception of actually being within a virtual world. When discrepancies happen between what your brain expects and what it actually feels, this feeling can be broken. This generates a sense of disappointment along with a sensation of being disassociated from the virtual environment. In order to deceive your mind and give it the illusion that your body is somewhere different than what your eyes are seeing, all five human senses should perceive the digital environment to be physically real. While tricking the sense of smell and taste is not so common in the video game world, the sense of touch has been attracting the attention of a growing number of companies all around the world. However, when the pedestrian movement is in- volved, it is not so clear how to provide a compelling haptic feedback as you would expect to receive it directly under your feet. This project aims to solve this problem by taking advantage of the potentialities offered by a vibrotactile floor. Two VR experiences have been developed: Infinite Runner, in which the floor was employed for the generation of particular haptic effects as a response to specific game events, and Wakaduck, in which it was tried to use the haptic feedback not only to enhance the user experience, but also and above all to provide some haptic cues whose understanding is essential to correctly play the game. iii
  • 4. Acknowledgements First and foremost, I would like to thank my Thesis Supervisor Prof. Bottino, for his great patience and inestimable remote support, showing a deep interest in the research topic carried out. My most truthful gratitude goes to Prof. Cooperstock, my Supervisor at McGill University, for allowing me to be part of his Research Group and assisting me throughout the entire thesis work, and for deeply believing in my potential. My deepest gratitude goes also to my friend and colleague Naoto Hieda, without whose constant support it would have been impossible for me to achieve my research ob- jectives. A special thanks goes to all my mates at Shared Reality Lab, that made my months in Montreal unforgettable. Finally, I would like to warmly thank my family and all my closest friends for the immense support provided also and especially in the most difficult times. Your love has been and will always be a landmark for me. NG iv
3.1.2 Motion capture architecture . . . . . . . . . . . . . . . . . . . 14
3.2 Game Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1 How to play . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.2 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.1 Frustum Update . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.2 Character Movement . . . . . . . . . . . . . . . . . . . . . . 19
3.3.3 Jump Detection . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.4 Slide down detection . . . . . . . . . . . . . . . . . . . . . . 26
3.3.5 Haptic feedback . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Experiments and Results . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4 MINIW 39
4.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Magic Tiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2.1 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Wakaduck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3.1 How To Play . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3.2 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . 49
Server Side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Unity Side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.4 Game features analysis . . . . . . . . . . . . . . . . . . . . . 55
4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5 Conclusions and Future Work 59
5.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
A User Testing Documents 63
B Acronyms 69
Bibliography 70
List of Figures

3.1 High level architecture diagram of the system . . . . . . . . . . . . 9
3.2 Haptic floor architecture diagram . . . . . . . . . . . . . . . . . . . 11
3.3 Motion capture system architecture . . . . . . . . . . . . . . . . . . 14
3.4 Headset with reflective markers on it used to track the user movements 15
3.5 In-game snapshots of Infinite Runner . . . . . . . . . . . . . . . . . 16
3.6 Camera frustums (left) and rendered scenes (right). The rendered scenes are montages of 4 cameras: left, front, right and floor. On the top row, the child node (i.e., the eye position) is centered, and thus the vanishing points on the rendered scenes are in the center of each image. On the bottom row, by contrast, the vanishing points are shifted due to the asymmetry of the frustums. 18
3.7 Haptic floor (left) and virtual floor (right), showing the range of values that can be assumed by them. Only the x-axis is taken into account, as the player has control only over the virtual character movements along this axis. 19
3.8 Sequence of actions in a standing vertical jump . . . . . . . . . . . 21
3.9 Vertical jump as seen by the motion cameras and the FSR sensors . 21
3.10 Captured data of a participant running and jumping around the haptic floor 24
3.11 Flowchart showing the main operations executed by the server (left side) and Unity (right side) to correctly detect a jump 25
3.12 Sequence of actions in a squat movement . . . . . . . . . . . . . . . 26
3.13 Squat movement as seen by the motion cameras and the FSRs . . . 26
3.14 Captured data of a participant executing different movements on the haptic floor 28
3.15 Average collected coins and hit obstacle rates for each participant, divided by haptic and audio sessions 35
3.16 Results of the post-session questionnaire for Group#1 (top) and Group#2 (bottom) 36
3.17 Results of the post-test questionnaire . . . . . . . . . . . . . . . . . 37
4.1 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 MINIW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3 Electrodes & Patterns . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 In-game screenshot of Wakaduck . . . . . . . . . . . . . . . . . . . 47
4.5 How to stand on MINIW while playing Wakaduck . . . . . . . . . . 47
4.6 Force bars used when playing Wakaduck . . . . . . . . . . . . . . . 48
4.7 Sensor positions within the tiles . . . . . . . . . . . . . . . . . . . . 50
4.8 Server code flowchart . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.9 Perspective view of the game field . . . . . . . . . . . . . . . . . . 55
4.10 Photo examples taken during MINIW demonstration at TEDxMontreal & Maker Faire Ottawa 57
4.11 Plot of participants against number of hit ducks . . . . . . . . . . 58
Chapter 1
Introduction

Imagine playing a first-person shooter (FPS) using special components to create a sense of immersion in the virtual reality (VR) world. You can look around and see the whole environment surrounding you, with your allies fleeing for their lives from hostile fire. And like them, you too are able to run away in search of a safe hiding place. As your feet make contact with the ground, you can hear the gravel creaking under your weight; not only that, you can also feel it under your feet, as if you were really there. Suddenly, a grenade explodes, and you sense the ground vibrate while the debris hits you on the back.

In recent years, we have witnessed the emergence of a growing number of increasingly sophisticated systems able to provide compelling graphical and auditory effects related to the interaction with a VR environment. Exploiting these devices, Yoon et al. [1] designed a game interface able to augment the user's level of immersion for the Unreal® Tournament 2004 FPS. The system in question is composed of a head-mounted display (HMD) used for showing the visual information, a 5.1 channel headphone for the auditory information, a head tracker and data gloves that make the interaction with the virtual world more natural than the one obtained by playing the video game on a computer (the latter component is used to recognize the user's hand gestures). Lugrin et al. [2] carried out a similar experiment, developing an immersive stereoscopic experience through a four-screen CAVE-like installation of an already-existing commercial computer FPS. Both experiments aimed to compare the desktop version of the chosen video game with its immersive counterpart, and both showed that users strongly preferred the latter.

The two examples just cited (like many other similar projects) have focused on investigating the best way to provide the user with a visual and auditory immersive experience, and on how to interpret the player's body movements, captured through motion sensors, and turn them into useful controls. However, in order to create a sense of full immersion, all five human senses (vision, hearing, touch, smell and taste) should perceive the digital environment to be physically
real. While stimulating the senses of smell and taste is not so common in the video gaming world, the sense of touch has been widely employed in a variety of video games since game controllers with embedded vibration actuators became available. As Burdea stated, "haptic feedback is a crucial sensorial modality in virtual reality interactions" [3], and it can be effectively employed to enhance the experience of events happening on the screen. For example, when playing a racing game, haptic feedback can be generated to alert the player whenever the car collides with an obstacle (e.g., a wall or another car). Certainly, game controllers with vibration feedback can offer some degree of feeling within video games, but the rumbles in your hands hardly count as immersive or lifelike.

To fill this lack of immersive haptic devices, a multitude of new haptic interfaces has recently appeared on the market, capable of delivering engaging sensory stimuli to different parts of the human body and, most importantly, at a reasonable cost. For example, the KOR-FX Haptic Gaming Vest (http://korfx.com/) is able to convert the sound coming from the video game (or any other audio source you are playing) into haptic feedback, creating a subwoofer-like vibration proportional to its strength. The Gloveone (https://www.gloveonevr.com/), instead, is a virtual reality glove able to provide haptic feedback that the user feels through his hand and fingers. However, when pedestrian movement is involved, the user may complain that receiving haptic feedback on the hands or on the sternum does not feel realistic at all, lowering the sense of immersion in the virtual world. In order to feel the terrain on which he is walking, the user should be provided with haptic feedback underneath his feet, just as happens in the real world.

This result can be effectively achieved by exploiting the potential offered by the haptic floor (also known as a vibrotactile floor), designed and built by Visell et al. [4]. This is a special surface that can simulate the feel of walking on different ground materials, such as snow, grass or pebbles. It consists of a matrix of square tiles, each of which has a linear motor actuator bolted to its underside; moreover, each tile comes with a force sensing resistor (FSR) embedded in every corner (i.e., each tile has four different FSRs). The signals generated by the sensors are conditioned and subsequently digitized by a microprocessor board, which transmits the force data via a serial data link to a computer running a software simulation written in the Max/MSP visual programming language. The simulation generates independent audio signals for each tile, which are used to drive each corresponding actuator via an audio amplifier.

The thesis' primary focus is to understand how the haptic floor could be employed to consistently enhance the players' experience or their performance in gameplay.
It is unknown whether haptic feedback is more effective if delivered to a body part that would normally experience such feedback in real-world conditions, e.g., to the feet vs. the hands if the interaction involves pedestrian movement. Gaining a better understanding of these issues will allow for improved game design and simulation of immersive, multimodal virtual reality experiences.

The thesis' secondary focus is to investigate whether the haptic floor could also be used as an input device and not simply as an output one. The idea is that the players can be provided with haptic cues instead of visual ones, which allow them to understand the status of the video game so that they can change their actions if necessary.

1.1 Thesis Outline

The remainder of the thesis is organized as follows. Preceding research on immersive environments and haptic technologies is reviewed in Chapter 2. In Chapter 3, a complete description of an immersive experience developed exploiting the potential offered by the vibrotactile floor is presented; Chapter 3 also presents the results obtained from an experiment designed to explore the role that the haptic feedback generated by the haptic floor can play in enhancing a player's experience while playing a video game. In Chapter 4, the experiences developed using MINIW, a 2×2 haptic floor platform, are described. Finally, conclusions and possible future work and enhancements are presented in Chapter 5.

1.2 Shared Reality Lab

The thesis project described in this document took place within the Shared Reality Lab (http://srl.mcgill.ca/), a facility that is part of the Centre for Intelligent Machines (CIM) research group at McGill University, Montreal (Canada).

The lab is broadly concerned with human-computer interaction technologies, emphasizing multimodal sensory augmentation for communication in both co-present and distributed contexts. The research carried out by its members tackles the full pipeline of sensory input, analysis, encoding, data distribution, and rendering, as well as interaction capabilities and quality of user experience. Applications of these efforts include distributed training of medical and music students, augmented environmental awareness for the blind community, treatment of lazy eye syndrome, low-latency uncompressed HD videoconferencing and a variety of multimodal immersive simulation experiences.
Chapter 2
Literature Review

The goal of this background chapter is to provide an introduction to the major technological breakthroughs that have been made in the field of virtual reality applied to gaming. These previous research efforts may be categorized into three distinct hardware groups: immersive displays, locomotion systems in immersive environments and haptic devices. Finally, the chapter also surveys research aimed at defining techniques for analyzing the immersion level of video games.

2.1 Immersive Displays

The history of virtual reality has its origins in the inventions of Morton Heilig, who in 1962 patented the Sensorama [5], a cabin with stereoscopic screens, stereo speakers and a moveable chair. This device engages several human senses: it allows the user to watch, through a stereoscopic viewer, real images shot using two cameras, provides tactile feedback by generating vibrations in the seat and the handlebars, uses a hair dryer to simulate wind at different speeds and, finally, generates olfactory feedback.

In 1968 Ivan Sutherland created the first HMD, called The Sword of Damocles [6]. The system consisted of two monitors (one for each eye) mounted on a device anchored to the ceiling and fastened to the user's head. It was capable of tracking the head position, whose movements were sent to a computer for generating the proper perspective (of a wireframe cube), giving a primitive illusion of being in a virtual world.

Following these milestones, VR has increasingly been used in the gaming field to provide players with the most immersive experience possible. In the early 1990s a company called Virtuality Group introduced VR to arcade video games. This result was achieved by employing the Virtuality cabinets [7], huge oversized units where players stepped in, placed virtual goggles over their heads and put themselves in a three-dimensional gaming world. The game unit was provided with several
games, including some of the most famous arcade games of the time, such as Pac-Man and Legend Quest. In 1998 the company developed a consumer VR display in partnership with Philips Electronics, but it did not have much success.

In 1993 Sega announced the Sega VR [8] headset for the Sega Genesis console. The headset was equipped with liquid-crystal displays (LCDs) in the visor, stereo sound and tracking sensors to follow the user's head movements. However, due to technical development difficulties the device never progressed beyond the prototype phase. An artistic application of such head-mounted displays is Osmose [9], developed by Davies and Harrison in 1996. The virtual environment consisted of semi-transparent image layers used to generate nature or cyber scenes. The position of the first person was controlled by a respiration sensor and a weight tracker: by breathing faster or slower, the user could move up or down, respectively, while the weight controlled the horizontal movement; this system took inspiration from diving.

It is only with the beginning of the new millennium that VR began to gain broad appeal, mainly due to cost reductions that have allowed the general public access to previously inaccessible technologies. The most famous device, to be released in the early months of 2016, is the Oculus Rift [10], which has once again brought the world's attention back to VR. Thanks to this device, the dream of most kids who grew up in the '90s will come true: a wearable viewer capable of letting us virtually explore any location, immersing us in virtual worlds created by a large number of developers. Born as a product merely intended for a gaming audience, with its acquisition by Facebook Oculus has become much more than a mere accessory for gamers; one of the most exciting and fascinating non-gaming applications of VR is the work done by Gerardi et al. [11], who developed Virtual Reality Exposure Therapy to assist veterans in the treatment of post-traumatic stress disorder (PTSD) by reconstructing events in a safe virtual environment controlled by the patients. The system was initially developed using an Emagin Z-800 3D visor, but the researchers plan to incorporate the Oculus Rift once the final version becomes available, so as to include scenarios specifically for military sexual trauma, with the idea of recreating not the assaults themselves but the context in which they occurred.

Since it is strictly related to the work presented in this document, it is worth mentioning the research carried out by Dan Sandin, who at ACM SIGGRAPH '93 demonstrated what was called a CAVE [12]. This is a system devoted to creating an immersive experience by surrounding a user with four projection screens on the left, center, right and floor. Although LCDs are not cost-effective for surrounding a user, projection can change the screen size by adjusting the distance. There are dome types as well, for which distortion has to be taken into account in graphics rendering [13]. Nonetheless, these platforms require a dedicated space. If a room has white walls, they can be used as screens even if they are not flat, square surfaces. To do so, perspectives must be corrected by obtaining their geometry relative to the projectors. To acquire the
geometry, fiducials and/or structured light can be used together with a camera. However, a standard camera lens cannot capture all the walls at the same time. Garcia-Dorado et al. [14] solved this problem with a single camera by using a mechanically controlled mount to orient the camera towards each wall. Hashimoto et al. [15] proposed a system with a fish-eye lens to capture the entire surface. Recently, Jones et al. [16] developed RoomAlive, which uses several pairs of depth cameras and a projector to acquire the geometry and project virtual contents. Not only the surface geometry but also the skeletal model of the user is tracked by the depth sensors, so that he can interact with the contents within the Unity game engine.

2.2 Locomotion in Immersive Environments

One of the most intriguing aspects of developing a VR environment is how to enable the player to navigate within it. Due to the physical constraints of a CAVE platform, several virtual locomotion methods have been proposed. For example, Cirio et al. [17] came up with three different solutions. In the first one, virtual signs are rendered in the graphical environment as a metaphor of traffic signs to guide the user. Second, a virtual rope is rendered around the user, allowing him to move within the virtual world by virtually pushing the rope with his hands. Finally, they introduced the Virtual Companion, i.e., a bird with virtual reins attached to it that the user can "grasp" in order to be carried around within the world. However, virtual locomotion systems such as the ones just cited do not require kinetic motions of the legs.

To accomplish kinetic input while keeping the user in place, a treadmill [18] or a low-friction surface [19] can be used. For example, Fung et al. [20] developed a system with a stereoscopic screen and a self-paced treadmill mounted on a 6-DOF motion platform for gait training. For an entertainment application, VibroSkate by Sato et al. [21] uses a skateboard metaphor to achieve kinetic locomotion: the left leg stays on a skateboard affixed to the platform and the user kicks a treadmill next to the skateboard with the right foot to virtually move in the environment. Moreover, transducers attached to the skateboard produce vibrations according to the virtual speed and ground condition. Graphics are generated by the Unity game engine and projected on the front and floor screens. It is worth noting that, by introducing a solid ring around the body of the user for safety reasons, this technology can be employed not only with a CAVE but also with HMDs such as the Oculus Rift for commercial applications. Examples of such systems include the Cyberith Virtualizer [22] and the Virtuix Omni.

Virtusphere by Medina et al. [23] is another example of a treadmill. Essentially, this device is a giant plastic hamster ball that lets users feel as if they were walking through a virtual world. Once inside the sphere, it is possible to move about freely, i.e., an individual can run, jump, move from side to side and,
virtually, act as he would in a real-world scenario.

2.3 Haptic Devices

Haptic feedback plays, in general, an important role in augmenting the level of immersion in VR systems. There are two kinds of haptic feedback: tactile and kinesthetic. An early example of a kinesthetic feedback system is the Phantom [24], which provides feedback on the fingers using DC motors, giving the illusion of physically touching objects in cyberspace. The PHANToM OMNI haptic device [25], instead, is a pen device attached to a mechanical arm, manufactured for research purposes. SPIDAR by Sato [26] has a ball attached to strings; the user holds the ball, and force feedback is provided by the tension of the strings. There are commercial devices as well, especially for gaming: for example, the Novint Falcon 3D Touch controller. For tactile feedback, electro-tactile displays were proposed by Kajimoto et al. [27]. The HORN Ultrasound Airborne Volumetric Haptic Display by Inoue et al. [28] is a non-contact, mid-air tactile feedback device which uses an array of ultrasound speakers to transmit energy to the hand. Fairy Lights in Femtoseconds by Ochiai et al. [29] displays a hologram using a femtosecond laser, and the energy of the laser provides haptic feedback.

For haptic interaction with the feet, Visell et al. [4] built a vibrotactile floor to synthesize virtual ground textures. Several virtual environments have been proposed for it; for example, a fluid simulation uses a particle system that reacts to a footstep, simulating bubbles for graphics and haptics rendering. In the snow example [30], the visual effects are simulated by modifying a height map in real time, although the preset haptic effects do not simulate the compression of the snow.

2.4 Video Game and Immersion

Immersion is a word often used to describe an aspect of video games, but it has no clear definition. According to the study by Brown and Cairns [31], gamers encounter several barriers on the way to total immersion. In the first stage, engagement, there must be a motivation to play the game; then the gamer has to understand the controls of the game and, next, spend time and put effort and attention into playing. The second stage is engrossment: to become emotionally involved in the video game, its graphics, tasks and plot, in particular, must be well designed. The last stage is total immersion, when gamers are absorbed in the video game and no longer care about their surroundings; this requires empathy with the character and depends on how the atmosphere of the graphics, plot and sounds relates to the game world. In a follow-up study by Cheng and Cairns [32], participants were asked to play Unreal Tournament, and at the midpoint the game's theme (environment textures, physics
parameters) was changed. Surprisingly, the participants were not bothered by the change, and some of them did not even notice it.

Hazlett [33] reported a method to detect positive valence using biosignals. The EMG of facial muscles was measured while playing a racing video game, and game events were classified into positive (e.g., overtaking) and negative (e.g., going off road) events. The results, mostly biological data, showed that positive emotions can be measured by EMG.

A study of movement-based video games was carried out by Pasch et al. [34]. In the first experiment, a qualitative analysis of Wii Sports was performed through user interviews; subsequently, a quantitative analysis of Wii Sports Boxing was carried out. In the second experiment, videos were recorded and five observers rated how close the players' movements looked to real boxing. Two strategies emerged: Game (moving just enough to trigger a punch in the game) and Simulation (simulating real boxing). The game strategy led to a high frequency of punches, but with small motions and less engagement. The simulation strategy involves defensive motions even though they are not necessary, and is used when gamers want to relax. Immersion happens when the player feels empathy with the avatar mimicking his motion.
Chapter 3
Infinite Runner

The aim of Chapter 3 and Chapter 4 is to report in detail the work carried out for the present thesis. In particular, in the following paragraphs we will focus on Infinite Runner, an endless running game developed with the intent of exploring the potential offered by the haptic floor. We will begin by providing a technical overview of the VR environment in use, and subsequently focus on the features of the game. Finally, in the last part of the chapter we will present the results of the experiment we conducted.

3.1 System Architecture

Figure 3.1: High level architecture diagram of the system

The system in use is composed of different subsystems interacting with one another. Figure 3.1 shows an overview of all the main components involved:
• CAVE: an immersive virtual reality environment consisting of three large screens on which the images generated by a VR application are projected. The environment also includes a motion capture system consisting of eight cameras placed on top of the CAVE's frame; this system allows us to track the user's movements. The haptic floor is housed within this environment: not only does it act as a fourth display surface, but it also provides realistic multimodal feedback that enhances the overall immersion.

• Mac mini array: an array of six Mac mini computers that receive data from both the haptic floor and a Windows machine running our video game. Each computer is responsible for synthesizing haptic feedback for one row of six tiles based on the incoming data.

• Graphic manager: a Windows computer tasked with executing our infinite runner video game. The information coming from the motion cameras and the haptic floor is used as input to update the state of the game. The game status is constantly sent to the Mac mini array, which uses it to update the haptic feedback in real time.

In the following sections we will discuss in more detail the characteristics of the haptic floor and the motion capture system, and how they interact with our application.

3.1.1 Haptic floor architecture

The haptic floor is a complex system consisting of many elements, both hardware and software. The aim of this section is simply to give a brief, albeit detailed, overview of the components shown in the diagram in Figure 3.2. For a much more technical analysis of the system, the reader is invited to have a look at the works by Visell et al. [4] [30] [35], the creators of the vibrotactile floor.

The haptic floor consists of a surface of 6×6 square tiles, each tile containing four FSR sensors, one in each corner, and a tactile transducer in the middle. This means that the system has in total 144 sensors and 36 transducers. The sensors are used to detect the force that a user standing on the floor is currently applying to it, while the transducers are used to play the synthesized audio signals that simulate the haptic feedback by means of the resulting vibrations.

All the sensors within a row are connected to one Gluion unit (http://www.glui.de/), which includes 4×6 analog-to-digital converters (ADCs) sampling the FSR voltages and converting them into a numerical format. Each unit also includes a network interface that is
used to broadcast all the raw values via the User Datagram Protocol (UDP), using the Open Sound Control (OSC) protocol (http://opensoundcontrol.org/). In order to acquire data from all the sensors, six different units are employed, each controlling 24 sensors.

Figure 3.2: Haptic floor architecture diagram
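To fix these numbers, the minimal sketch below collects the floor's layout constants. The index helper assumes that the 24 values of a row packet are ordered tile by tile, four corner readings per tile; this ordering is an assumption of ours, not something taken from the hardware documentation.

// Layout constants of the vibrotactile floor as described above.
// The packet ordering encoded in SensorIndex is an assumption.
static class FloorLayout
{
    public const int Rows = 6;
    public const int TilesPerRow = 6;
    public const int SensorsPerTile = 4;                               // one FSR per corner
    public const int SensorsPerRow = TilesPerRow * SensorsPerTile;     // 24, one Gluion unit per row
    public const int TotalSensors = Rows * SensorsPerRow;              // 144
    public const int TotalTransducers = Rows * TilesPerRow;            // 36

    // Index of a given corner reading inside a row's 24-value packet.
    public static int SensorIndex(int tileInRow, int corner)
    {
        return tileInRow * SensorsPerTile + corner;
    }
}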
OSC is a protocol for communication between computers, synthesizers, audio equipment and other multimedia devices. It was designed to support a client/server architecture. Each OSC message consists of three parts:

• Address pattern: an arbitrary sequence of characters preceded, and possibly interspersed, by a "/". It represents the name of the message specified by the client and, through the use of the delimiter, it is possible to create hierarchies of messages following a directory/file model.

• Type tag string: a string whose sequence of characters specifies the type of the data sent. Indicating the nature of the data is not mandatory, but it is highly recommended.

• Arguments: the data contained in the message.

All the packets generated by each Gluion unit have an address pattern equal to "/analog", and contain the 24 values output by the ADCs. The recipient of these packets is an array of six Mac mini computers, each of which is responsible for managing the data coming from a single unit. This means that each unit always sends data to the same machine, i.e., the Gluion receiving data from the sensors placed in the first row sends data to the first Mac mini, the second Gluion sends data to the second Mac mini, and so on. Each Mac mini is responsible for generating haptic feedback, through a Max/MSP (https://cycling74.com/) patch, only for the row of six tiles from which it receives the data.

However, we must note that the information regarding which feedback should be generated for each individual tile, and at what intensity, does not come directly from the Gluion units. All the Mac minis constantly execute a program written in Java whose aim is to accept new inbound OSC messages; these are not directly parsed: the program simply rebroadcasts the incoming "/analog" messages from the Gluion units to a NIW server after updating the address pattern to "/niw/server/update/row/Mac_mini#", where Mac_mini# is the number of the machine from which the message is being sent. In the terminology of the architecture, a computer running this program is said to be a "NIW slave"; all the Mac minis belong to this category. The NIW server is none other than Mac mini #1 which, in addition to executing the program that makes it a slave machine, also runs another program that performs simple filtering and analysis operations on the pressure data incoming from all the Gluion units (via the various NIW slave instances).
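To make the relabelling step concrete, here is a minimal sketch of an OSC message and of the forwarding performed by a NIW slave. It is written in C# purely for illustration (the actual slave program is written in Java and relies on a proper OSC library), and the class and method names are our own:

// Minimal stand-in for an OSC message: address pattern, type tag string, arguments.
class OscMessage
{
    public string AddressPattern;
    public string TypeTags;
    public float[] Arguments;

    public OscMessage(string address, float[] args)
    {
        AddressPattern = address;
        Arguments = args;
        TypeTags = "," + new string('f', args.Length);   // e.g. ",ffff..." for 24 floats
    }
}

static class NiwSlave
{
    // Rebroadcast an incoming "/analog" packet (24 raw FSR values from one Gluion)
    // to the NIW server, relabelled with the number of the Mac mini running this slave.
    public static OscMessage Relabel(OscMessage incoming, int macMiniNumber)
    {
        if (incoming.AddressPattern != "/analog")
            return null;   // only Gluion packets are forwarded

        return new OscMessage("/niw/server/update/row/" + macMiniNumber, incoming.Arguments);
    }
}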
One of the most important operations executed by the server is to convert the incoming data from a raw format (i.e., one data point per sensor) to a tile format (i.e., one data point per tile); in this way, each row has only six associated values instead of 24. Exploiting the fact that Max/MSP has native support for OSC, these values are inserted as arguments in an OSC message and sent to the Max/MSP patch running on the computer in charge of the row to which they refer (e.g., the six values of the first row are sent to the patch running on Mac mini #1, the six values of the second row to the patch running on Mac mini #2, and so on). The pressure values are used by the patches as parameters for the physical model employed to generate the selected haptic feedback; the latter is just an audio signal that, through analog connections, is sent to and reproduced by the 36 haptic transducers placed underneath the tiles (one per tile). In order for this system to work correctly, it is essential that the patches present on each Mac mini send an OSC message to the server to inform it of which address pattern to use when sending pressure data. For each Mac mini, the server saves the pair Mac_Mini_#/Address_Pattern, so as to be able to notify all the machines whenever the floor status is updated.

The server does not exchange information only with the other Mac minis, but also, and especially, with a Windows machine responsible for the execution of our VR applications by means of the Unity game engine. In particular, the server notifies the running application about changes in the floor status by sending several OSC messages: for example, by analyzing the raw data coming from the sensors, it is possible to state whether someone is standing on the floor and the position of his feet within the CAVE, and even whether he makes a jump (a detailed description of how the system detects when someone standing on the floor performs a jump will be given in Section 3.3.3). All the data contained within these OSC messages are used as inputs by the application to update its status. In order for the server to be able to communicate properly with the application, the latter must send, during the startup phase, a message indicating which address pattern should be used when sending notifications about one specific event. This means that it will send as many messages as the number of services needed from the server: for example, if the application needs to be notified whenever a user is standing on the floor and whenever he performs a jump, it will send two OSC messages indicating two different address patterns.

Finally, the application has the ability to define what feedback should be generated by each individual tile whenever a user steps on it. This is done by sending an OSC message to the server containing, for each tile, the name of the feedback preset to be associated with it. The server then forwards this information to all the Max/MSP patches. Moreover, the application can also trigger a neutral feedback from a specific tile even when no one is stepping on it. For more details on how our application exploits the potential offered by this newly designed system, the reader is invited to see Section 3.3.5.
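Returning to the raw-to-tile conversion mentioned above, the sketch below aggregates the 24 readings of one row into six per-tile values. The use of a simple sum of the four corner readings is our assumption; the real server may filter or weight the readings differently.

// Illustrative sketch of the server-side raw-to-tile conversion for one row.
// rawRow holds 24 FSR readings (four corners per tile, six tiles); the result
// holds one aggregated pressure value per tile.
static float[] RawToTile(float[] rawRow)
{
    const int tilesPerRow = 6;
    const int sensorsPerTile = 4;
    var tileValues = new float[tilesPerRow];

    for (int tile = 0; tile < tilesPerRow; tile++)
    {
        float sum = 0f;
        for (int corner = 0; corner < sensorsPerTile; corner++)
            sum += rawRow[tile * sensorsPerTile + corner];
        tileValues[tile] = sum;   // one aggregated value per tile (sum is an assumption)
    }
    return tileValues;
}

The six resulting values would then be packed as the arguments of an OSC message addressed to the Max/MSP patch in charge of that row.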
3.1.2 Motion capture architecture

Figure 3.3: Motion capture system architecture

In addition to the haptic floor, the other key element of our architecture is the motion capture system used to track the player's movements. As depicted in Figure 3.3, the setup consists of eight Vicon Bonita B10 motion cameras (http://www.vicon.com/products/camera-systems/bonita) arranged on top of the CAVE frame in a strategic order, so as to "capture" all the space contained within it (i.e., the cameras face the haptic floor). These high resolution cameras emit a special strobe light, which is reflected back by small spheres (markers) covered with a retro-reflective substance; as shown in Figure 3.4, those markers are placed on a headset that can easily be worn by the user whose movements we want to track. The reflected light is captured by each camera, and the resulting images are sent via Ethernet to the Windows machine executing the specialized software called Vicon Tracker. The aim of this application is to locate the markers seen through the cameras and to record them as 3D coordinates. The markers placed on the headset are defined in the application as a rigid body, i.e., as a virtual object composed of a specified set of markers with a relatively fixed proximity to one another. In other words, those markers are considered as a whole and not as single objects.
Figure 3.4: Headset with reflective markers on it used to track the user movements

One of the most useful features of Vicon Tracker is its built-in Virtual-Reality Peripheral Network (VRPN, https://github.com/vrpn/vrpn/wiki) server, through which the application natively streams the position and orientation data of all the defined rigid bodies; in our case, the only data broadcast are those associated with the headset. However, these data cannot be received directly by our video game built using Unity, unless it is equipped with a VRPN client. Taking advantage of the fact that our application already contains an OSC client for receiving data from the haptic floor, it was decided to exploit the functionality offered by the Vrpn-OSC-Gateway project (https://code.google.com/p/vrpn-osc-gateway/) and, in doing so, standardize the system so that it only has to deal with OSC messages. Simply speaking, this small application receives the tracking data directly from Vicon Tracker, converts them and, finally, sends them out as OSC messages. As we will see in the coming sections, the data contained within those messages are essential for the correct functioning of our video game.
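As a rough illustration of how the game side could consume this stream, the sketch below caches the most recent head pose whenever a tracking message arrives. The address pattern and the assumption that the gateway delivers seven floats (position plus quaternion) are ours; the actual message layout used by the project is not reproduced here.

using UnityEngine;

// Illustrative sketch: stores the latest head pose received from the
// Vrpn-OSC-Gateway so that other systems (frustum update, character movement)
// can read it every frame. Address pattern and argument layout are assumptions.
public class HeadTracker : MonoBehaviour
{
    public Vector3 HeadPosition { get; private set; }
    public Quaternion HeadOrientation { get; private set; }

    // Called by the OSC client whenever a rigid-body update is received.
    public void OnTrackingMessage(string addressPattern, float[] args)
    {
        if (addressPattern != "/tracker/headset" || args.Length < 7)
            return;   // "/tracker/headset" is a hypothetical address pattern

        HeadPosition = new Vector3(args[0], args[1], args[2]);
        HeadOrientation = new Quaternion(args[3], args[4], args[5], args[6]);
    }
}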
3.2 Game Design

This section gives a thorough description of the main characteristics of the VR application and explains the reasons that led to the development of a video game of this genre rather than another one (such as an open-world video game).

3.2.1 How to play

Figure 3.5: In-game snapshots of Infinite Runner

Infinite Runner falls under the genre of infinite running games, in which the virtual character continuously moves forward through a procedurally generated, theoretically endless game world. In the game, the player controls a soldier who, having broken into a castle to steal its treasures, is chased by a dragon that wants to burn him alive. The goal of the game is to collect as many coins as possible while avoiding all the obstacles encountered along the way. Figure 3.5 shows some screenshots of the final version of the game.

The application was developed using the toolkit called "Infinite Runner Starter Pack" (https://www.assetstore.unity3d.com/en/#!/content/8949), which provided us with an already functioning gaming system. The choice of using this Unity asset instead of developing an entirely new game system from scratch was dictated by the fact that the thesis' main purpose was to enhance the user experience of a game that had already been considered fun and immersive when played in a regular mode, i.e., on a computer or on a mobile device. This
decision made it possible for us to concentrate on how to make the most out of the available immersive VR environment rather than spending time designing the game itself.

With the original version of the game (i.e., the one played using a normal computer and a keyboard) the player, sitting comfortably on a chair, can press either the left or right arrow key to move the virtual character to the left or right, collecting coins or avoiding objects. When he needs to turn left or right at a crossroads, he can simply press the arrow key for the corresponding direction. If he wishes to jump over an object, he can press the up arrow key, while if he wishes to slide under an object, he can press the down arrow key instead.

What we want now is for the player to be able to play the game in a much more immersive way, experiencing the functionality offered by the SRL's CAVE. As a consequence, all the actions just listed are no longer simply performed by the virtual character as a reaction to a key pressed on a keyboard; the player himself has to physically execute them. In other words, the player has to impersonate the virtual character:

• First, the player should be able to move to the right or to the left within the perimeter of the haptic floor so as to make the approaching coins shown on the screen "hit" his body and, in doing so, collect them; similarly, his movements should also allow him to avoid the obstacles.

• The player should be able to turn to the right or to the left at a crossroads in an intuitive way whenever necessary.

• Finally, in order to avoid some particular obstacles the simple lateral movement may not be enough. In all these cases, the player should be able to either jump over or slide under the obstacles, according to the necessity.

In addition to all these features, the player should also be provided with tactile feedback from the vibrotactile floor as a consequence of his actions, so as to make his experience as immersive as possible. The following paragraphs are devoted to presenting a comprehensive analysis of how all these aspects of the game have been implemented using the data coming from the motion cameras and, above all, from the haptic floor.

3.2.2 Motivations

The decision to develop an endless running game was not casual, but was dictated by an intrinsic limitation of any CAVE environment: the walking area is restricted by the physical space. As mentioned in Chapter 2, there are several solutions to overcome this problem, most of them requiring the use of a special pointer. As a result, the player does not physically move, but can simply press a button to
achieve the desired result. In my opinion the use of such a device causes a break in presence (BIP) from the virtual environment. This led me to implement Infinite Runner, in which it is the virtual world that moves around the player; although the latter does not have the freedom to physically navigate within the virtual world, the sensation of having an environment around him that keeps moving over time provides him with a feeling of movement, as if he were really running in that world.

3.3 Implementation

3.3.1 Frustum Update

Figure 3.6: Camera frustums (left) and rendered scenes (right). The rendered scenes are montages of 4 cameras: left, front, right and floor. On the top row, the child node (i.e., the eye position) is centered, and thus the vanishing points on the rendered scenes are in the center of each image. On the bottom row, by contrast, the vanishing points are shifted due to the asymmetry of the frustums.

In an immersive system, virtual objects must be rendered in such a way that the viewer can perceive a parallax effect. In order to do so, the physical setup must be correctly mapped to the virtual environment. The unit of length in both the game engine and the motion capture system is the meter. The motion capture system is calibrated so as to have its origin located at the center of the floor, and it tracks the user's head position. In the virtual environment, parent and child nodes of a scene graph are used. The parent node represents the physical origin (i.e., the center of the floor), and it can be moved to an arbitrary position in the virtual environment to "teleport" to another position. The node has a bounding box with a fixed dimension of 2.4 m × 2.4 m × 2.4 m, centered at 1.2 m height above the physical origin; the bounding box is defined to represent the physical screens of the CAVE. The local position of the child node is updated to be the tracked head position.

For graphics rendering, since the setup consists of flat rectangular screens, a camera model can be defined for each screen [12]; in our setup there are four of them: to the left, right, front and bottom of the user (Figure 3.6). The camera frustums must be updated with respect to the user's eye positions. For a monoscopic setup, the camera position must be at the midpoint of the eyes; therefore, the camera position is approximately the position of the child node. The near clipping plane is the plane of the bounding box which represents the physical screen. In practice, virtual objects can exist closer to the viewer than the physical screen. Thus, the top, bottom, left, right and near parameters are multiplied by a factor x < 1 to bring the near clipping plane closer to the viewpoint so that such objects can still be rendered (for Infinite Runner, we use x = 0.0625).
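The per-screen frustum computation can be summarized by the following sketch. It is a reconstruction under stated assumptions, not the project's actual code: the eye position is assumed to be already expressed in the screen's local frame (origin at the screen center, z pointing from the screen towards the viewer), and the resulting parameters would feed a standard off-axis (asymmetric) projection matrix.

// Illustrative per-screen frustum update. halfWidth and halfHeight describe the
// physical screen (1.2 m for the CAVE walls); eyeX, eyeY, eyeZ are the tracked
// head position in the screen's local frame; nearScale is the factor x < 1
// (0.0625 for Infinite Runner).
struct FrustumParams { public float Left, Right, Bottom, Top, Near; }

static class CaveCamera
{
    public static FrustumParams ComputeFrustum(float eyeX, float eyeY, float eyeZ,
                                               float halfWidth, float halfHeight,
                                               float nearScale)
    {
        // With the near plane lying on the physical screen, the frustum extents are
        // simply the screen edges seen from the (off-center) eye position.
        var f = new FrustumParams
        {
            Left   = -halfWidth  - eyeX,
            Right  =  halfWidth  - eyeX,
            Bottom = -halfHeight - eyeY,
            Top    =  halfHeight - eyeY,
            Near   =  eyeZ                    // distance from the eye to the screen plane
        };

        // Scaling all five parameters by the same factor keeps the frustum shape but
        // moves the near clipping plane closer, so objects between the viewer and the
        // physical screen can still be rendered.
        f.Left *= nearScale; f.Right *= nearScale;
        f.Bottom *= nearScale; f.Top *= nearScale;
        f.Near *= nearScale;
        return f;
    }
}

Each of the four screens maintains such a frustum, recomputed every frame from the tracked head position.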
3.3.2 Character Movement

Figure 3.7: Haptic floor (left) and virtual floor (right), showing the range of values that can be assumed by them. Only the x-axis is taken into account, as the player has control only over the virtual character movements along this axis.

After implementing a system that allowed us to obtain a perspective correction based on the user's head position, the next problem to be taken into account is how to use the data coming from the motion cameras to make the invisible virtual character move according to the player's movements. This is mainly done by performing a mapping between the physical and virtual coordinates. As depicted in Figure 3.7, the motion tracking values range between −1.2 m and 1.2 m, with the haptic floor enclosed in the area −0.9 m/0.9 m; each tile has a length of 0.3 m. The virtual coordinates, instead, vary between −2 and 2, with a sensitivity of 0.1. The sensitivity represents the amount of motion that the virtual character is able to perform at a time: the character cannot move at will from one position to another; for example, it is not possible to make it move from position 0.5 to position 1.0 in one single movement, but it takes five consecutive translations along the x-axis to reach the final destination. In other words, assuming the game were played with a keyboard, this is the movement that we would get any time we press an arrow key. The virtual character can thus be placed in 40 different positions within the virtual world. It was decided to divide the virtual world into this specific number of positions as a compromise between having a fairly smooth movement and limiting the amount of delay introduced (delay due to the non-immediacy of the movement from one position to another one not directly accessible).

Whenever a new packet is received from the motion cameras, the first operation performed is to divide the coordinate value along the x-axis by 1.2, so that it assumes a value between −1 and 1. The next step is to check whether this value is contained within the range −0.9/0.9 and, if so, divide it by 0.045, remembering to take its sign into consideration; this number represents the ratio between 0.9 and 20, and the result defines the position in which the virtual character should be placed (a position ranging between −20 and 20). In case the value is out of bounds, which can happen if the player is standing on the black frame surrounding the haptic floor, the virtual player is assumed to be either on the rightmost or leftmost side of the virtual world (i.e., either in position 20 or −20), depending on the sign of the coordinate value. The newly computed position is then compared with the current one and, if they differ, the virtual character is shifted either to the left or to the right depending on the sign of the difference; for example, if the current position is 12 while the new one is 14, we have 14 − 12 = 2, corresponding to two consecutive translations to the right.
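The mapping just described can be condensed into the following sketch, a simplified reconstruction rather than the project's actual code; in particular, rounding to the nearest discrete position is our assumption.

// Simplified reconstruction of the physical-to-virtual x mapping.
// trackedX is the head x coordinate from the motion capture system, in meters
// (between -1.2 and 1.2); the return value is the target character position,
// an integer between -20 and 20.
static class CharacterMapping
{
    public static int ToCharacterPosition(float trackedX)
    {
        float normalized = trackedX / 1.2f;              // now in [-1, 1]

        if (normalized < -0.9f) return -20;              // player on the black frame, left side
        if (normalized >  0.9f) return  20;              // player on the black frame, right side

        // 0.045 is the ratio between 0.9 and 20; rounding is an assumption.
        return (int)System.Math.Round(normalized / 0.045f);
    }

    // The character is then shifted one step at a time towards the target position,
    // e.g. from position 12 to 14 it takes two consecutive translations to the right.
    public static int StepTowards(int current, int target)
    {
        if (target > current) return current + 1;
        if (target < current) return current - 1;
        return current;
    }
}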
• 29. 3 – Infinite Runner Turn Left/Right Related to this, another problem that needs to be solved is how to let the player turn either to the right or to the left when facing a crossroads. We considered many solutions, among which the speech recognition functionality offered by the Kinect, so that the player could simply say “Go Left” or “Go Right” to trigger the virtual character’s turning ability; however, this solution proved to be unfeasible, mainly because of the large delay it introduced and its poor intuitiveness. We therefore decided to take advantage of the character movement system described above as follows: whenever the virtual character is in a position greater than or equal to 14 (i.e., whenever the player is physically located in the rightmost column of the haptic floor), its turning ability to the right is triggered. Similarly, the turning ability to the left is activated every time the virtual character is in a position lower than or equal to −14. All the logic used to check whether the character can really turn in a specific direction is provided directly by the original Unity asset. Here, it is enough to say that it is possible to turn in a certain direction only if the character is currently standing on a turn platform (i.e., a virtual platform provided with a crossroads) and, above all, if the intersection allows that specific direction (it is not possible to turn right if the only available road is to the left). This solution may not feel entirely natural at first, but in our opinion it was the best we could implement without creating any break in presence (BIP) in the virtual world. In any case, all those who had the chance to try out the system were satisfied, saying that the idea became very intuitive after getting used to it. 3.3.3 Jump Detection Figure 3.8: Sequence of actions in a standing vertical jump Figure 3.9: Vertical jump as seen by the motion cameras and the FSR sensors When playing an infinite runner game, one of the most important abilities the player can rely on is the possibility of avoiding a collision with an obstacle by jumping over it. In our game this ability is necessary to avoid obstacles such as chairs and tables and, most importantly, to avoid ending up in the fire, which would lead to losing the game. The detection system must comply with three constraints: 1. It must be as fast as possible; otherwise, the player would not be able to promptly avoid obstacles because of latency. 2. It must not require too much computational power, i.e., it must not influence the frame rate of the game.
• 30. 3 – Infinite Runner 3. It must be very difficult to produce false positives, i.e., the player has to really jump to trigger a jump of the virtual player. Before describing how the jump detection system was implemented in our game, it is important to accurately understand and analyze the mechanics of the vertical jump. As described by Linthorne [36] and Boukhenous et al. [37], the jump movement can be seen as a composition of different sub-actions, summarized in Figure 3.8: first of all, the jumper, starting from an upright standing position, makes a preliminary downward movement by flexing at the knees and hips until reaching an angle of about 90°; after that, he immediately extends the knees and hips again to jump vertically up off the ground. Such a movement makes use of the “stretch-shorten cycle”, in which the muscles are pre-stretched before shortening in the desired direction. Finally, the landing phase is performed with the knees extended, on tiptoe, with subsequent cushioning to prevent any trauma. Figure 3.9 shows a representation of a vertical jump from the point of view of the motion cameras (blue line) and the FSR sensors placed underneath the floor (red line). Analyzing this plot, it is possible to define several key times and phases of the movement: • A: Initial stage of the jump. The user is standing in an upright position and stationary; the position received from the motion cameras matches the height of the user (since the markers are placed on top of the helmet worn on his head). • A-B: The jumper relaxes his leg and hip muscles, allowing the knees and hips to flex under the effect of the force of gravity; as a result, the user starts to move downward, causing a reduction in both the force applied on the floor and the position of the head. • B-C: Boost phase, during which the user increases the force applied on the floor. However, the user continues to move downward, causing a reduction in the head height. C is the point at which the acceleration is zero, i.e., muscular strength equals weight force. • C-D: The resultant force applied on the floor is now positive, and as a result the user starts accelerating upwards. Note that even though the acceleration is positive, the user still continues to move downward. D marks the maximum acceleration, caused by the expression of the maximum muscle strength. • D-E: This is the so-called “pushoff phase”, in which the subject extends the knees and hips and starts moving upwards; the force applied on the floor starts declining rapidly, while the head height increases. At E the muscle strength is again equal to the weight force.
  • 31. 3 – Infinite Runner • E-F: Phase in which the force exerted by the muscles becomes lower than the weight force, giving rise to a negative acceleration. • F-G: Ascending phase of the jump, during which the subject is still moving upwards, but he has started to slow down as a result of the force of gravity. The user’s feet are not anymore in contact with the floor. • G-H: Descending phase of the jump, where G is the peak of the jump, and H is the instant at which the user’s feet are back in contact with the floor, causing a peak on the force applied on it. The user flexes his hips and knees in order to cushion the landing. Eventually, in the moment that the user is back to be motionless on the floor, the force applied on it and the head height return to be equal to the values registered at the initial instant A. During the phase F-H the acceleration assumes a constant value equal to the gravitational acceleration. In order to be sure that a jump has always similar characteristics to those just described, a small experiment was conducted in which there was asked to a partici- pant to stand stationary within the CAVE in a standing position for few moments, and then to randomly move and perform some jumps within it; along all the test, the data coming from the motion cameras and the FSR sensors underneath the floor have been recorded. In Figure 3.10 there is a graphical representation of the recorded data. In the upper figure we have for each frame the value captured by the motion cameras, while the lower one shows for each frame the sum of all the values sensed by the FSR sensors. In both figures the moments in which the participant has performed a jump have been highlighted. By carefully analyzing these graphs, it is possible to notice how each jump always presents the same pattern: first of all, there is a downward movement, which causes an increase of the force applied on the floor; this is followed by an upward thrust phase, which takes the user to no longer make contact with the floor for a few moments (generating a rapid reduction of the FSR values); finally, there is the landing phase, in which the user returns to make contact with the floor, generating a peak in the force applied to it. However, it is unthinkable to identify a jump only after it is completed, because this would introduce a lot of latency in the game: even if the player physically executed a jump in time to avoid an obstacle, the virtual player would perform it much later, and this would probably lead to a collision. It is therefore necessary to extract some features from the graphs that can be used to correctly identify a jump in the shortest possible time. First, it can be observed from both charts how it is very difficult to identify when a jump starts, since the data for this phase gets confused with the ones you get when you move inside the CAVE. A similar observation can be made for the final phase 23
• 32. 3 – Infinite Runner Figure 3.10: Captured data of a participant running and jumping around the haptic floor of the jump. Secondly, it is essential to point out a peculiarity of the FSR values: as long as the user has at least one foot on the floor, the output data always has a fairly high value. The moment the user stops exerting pressure on the floor (i.e., during the ascending phase of the jump), the output suddenly drops and assumes a very small value for several frames8; it eventually returns to a high value once the user has landed and is back in contact with the floor. This is the key feature on which the jump detection system is based: once the game has started, the server calculates for each frame the summation 8 Due to the weight of each tile, the FSR sensors always sense a force different from zero. Furthermore, drift current increases the output voltage.
• 33. 3 – Infinite Runner of all the FSR sensor values. If the sum is above a certain threshold, something is exerting pressure on the floor, i.e., a player is assumed to be on it. When the sum drops below this threshold, and remains so for a defined number of frames, it is presumed that the player is currently in the ascending phase of a jump, and the event is immediately notified to Unity by sending an OSC message. However, the fact that the sum has dropped below the threshold could simply mean that the player has stepped off the floor, and therefore no one is performing any jump. To avoid a false positive detection, when Unity receives the message from the server it is necessary to check the data coming from the motion cameras: if a steady increase along the y-axis has been registered in the last frames (i.e., the head position is moving upward), then it is possible to state that the player is actually jumping. As a result, a jump action is triggered for the virtual player and, hopefully, it will be able to avoid the obstacle. Figure 3.11 depicts a simple flowchart of the operations just described. Figure 3.11: Flowchart showing the main operations executed by the server (left side) and Unity (right side) to correctly detect a jump In order to make the system as efficient as possible, we need to carefully define the threshold value: a value too low would not allow us to correctly identify many jumps, while a value too big would cause too many false positives. Moreover, it is also necessary to define the number of frames that must be analyzed to be able to state that the player is no longer applying force on the floor. After several tests performed with the SRL members, it was decided to use the empirically chosen values 25,000 and 5 for the threshold and the number of frames, respectively. It is worth pointing out that the moment the player starts a jump and the moment it is correctly identified do not coincide; this introduces a delay in the game, albeit a limited one. A sketch of this two-stage check is given below.
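The following C# sketch summarizes the two-stage check just described. In the real system the first stage runs in the C++ floor server; it is shown here in the same language as the Unity-side check purely for illustration, and the head-height window parameters are assumptions rather than values taken from this work.

```csharp
using System.Collections.Generic;

// Illustrative sketch of the two-stage jump check described above.
public class JumpDetector
{
    const float FsrSumThreshold = 25000f; // empirically chosen threshold on the FSR sum
    const int   FramesOffFloor  = 5;      // consecutive frames the sum must stay below it

    int framesBelow = 0;

    // Stage 1 (in practice run by the C++ server): called once per FSR frame with the sum
    // of all sensor values; returns true when a "possible jump" OSC message should be sent.
    public bool OnFsrFrame(float fsrSum)
    {
        framesBelow = fsrSum < FsrSumThreshold ? framesBelow + 1 : 0;
        return framesBelow == FramesOffFloor;
    }

    // Stage 2 (Unity side): confirm the candidate by checking that the head height has been
    // steadily rising over the last frames. Window length and minimum rise are assumptions.
    public static bool HeadIsRising(IReadOnlyList<float> recentHeadY, float minRise = 0.02f)
    {
        if (recentHeadY.Count < 2) return false;
        for (int i = 1; i < recentHeadY.Count; i++)
            if (recentHeadY[i] < recentHeadY[i - 1]) return false; // must keep moving upward
        return recentHeadY[recentHeadY.Count - 1] - recentHeadY[0] > minRise;
    }
}
```

A jump is triggered for the virtual character only when both stages agree, which filters out the case in which the player has simply stepped off the floor.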
  • 34. 3 – Infinite Runner correctly state that the player is no longer applying force on the floor. After several tests performed with the SRL components, it was decided to use the empirically- chosen values 25.000 and 5 for the threshold and the number of frame, respectively. It is good to point out that the time when the player starts a jump and the one in which it is correctly identified do not coincide; this generates the introduction of a delay in the game, albeit content. 3.3.4 Slide down detection Figure 3.12: Sequence of actions in a squat movement Figure 3.13: Squat movement as seen by the motion cameras and the FSRs Thanks to the jump detection feature described in the previous section, the player is now able to avoid most of the obstacles that can appear within the game world. However, for all those who have played at least once an infinite running game, it should be known that some obstacles can be avoided only by passing under them rather than above. Even for our game it is essential to have this ability, so that obstacles such as chandeliers or spikes traps can be successfully avoided. When playing with a computer, the ability to slide under an obstacles is gener- ally triggered by pressing the down arrow key. Since in our game the user himself is considered to be the controller, he is the one supposed to make a downwards movement, so that this can be mapped to the virtual character. As depicted in Figure 3.12, this movement consists of three main phases: the player, in a standing position, performs a downward movement by bending his knees, and then subse- quently executes an upwards movement so that to return in the starting position. This sequence of movements is very similar to the one followed during a jump: in this case, however, the player, during the upward movement, does not exert a force 26
• 35. 3 – Infinite Runner strong enough to overcome the force of gravity and, as a consequence, his feet remain in contact with the floor the whole time. This type of movement is basically the one executed during the squat exercise9. As already done for the jump detection system, in order to understand how to correctly identify when the player is trying to trigger the slide down ability of the virtual character by making a downward movement, a small experiment was run in which a participant was asked to execute within the CAVE the sequence of movements just described. Throughout the test, the data coming from the motion cameras and the FSR sensors were recorded so that they could then be represented graphically. Figure 3.13 shows a representation of these data. By carefully analyzing this plot, it is possible to define several key points of the movement: • A: At the beginning the participant is standing stationary in an upright position. As a result, the force applied on the floor is constant, while the position received from the motion cameras matches the height of the user. • A-B: The participant starts relaxing his leg and hip muscles, allowing his knees and hips to bend under the effect of the force of gravity. This causes a drop in the force applied on the floor, while the head position remains unchanged for a few moments. • B-C: Free-fall phase, during which the participant starts executing a downward movement, leaving the sole force of gravity to act on him. This causes a reduction of both the force applied on the floor and the head position value. • C-D: The participant exerts a force on the floor so as to slow down the fall. However, he keeps moving downward, causing a reduction in the head height, until he eventually assumes a crouched position. • D-E: Throughout this phase the participant remains stationary in a crouched position. The head position is constant, while the force sensed by the FSR sensors keeps increasing until it assumes a value similar to the initial one. • E-F: Boost phase, during which the participant starts an upward movement by exerting a force on the floor, so as to return to a standing position; the peak of this force is reached at G. Eventually, both the force applied on the floor and the head position return to the values recorded at instant A. 9 https://en.wikipedia.org/wiki/Squat_(exercise)
  • 36. 3 – Infinite Runner After defining the details of the dynamics of the movement, it was decided to run a further experiment along the lines of that carried out for the jump detection. In particular, a participant was asked to execute some jumps and squats while moving within the CAVE; this was done in order to be sure that the two movements can be easily distinguished from one to another, and, more importantly, to identify some features that allow us to correctly detect when the participant is in a crouched position (so that to trigger the slide down ability of the virtual character). The data recorded during all the experiment can be observed in Figure 3.14. In the upper figure we have for each frame the value captured by the motion cameras, while the lower one shows for each frame the sum of the values sensed by any FSR sensor. In both plots it was highlighted the moments in which the participant has performed either a jump (light blue) or a squat (light gray). Figure 3.14: Capture data of a participant executing different movements on the haptic floor 28
  • 37. 3 – Infinite Runner The first thing to notice is that the two movements present totally different characteristics: considering the value received by the motion cameras, during the ascending phase of the jump this value results to be much above the normal one (i.e., the participant height); on the contrary, during the execution of a squat, it assumes a value much below the normal one. Considering instead the FSR sensors data, the execution of a jump brings to a sudden reduction of the sum of the values sensed by them; as explained in the previous section, this is the main characteristic on which the whole jump recognition system was based. As for the squat movement, instead, the haptic floor does not provide any significant information. Analyzing the plot, it is possible to observe how the data contained in the highlighted areas in blue have a similar pattern to the one that occurs when the participant is simply walking within the CAVE (non- highlighted areas); these areas, in fact, do not have any values much greater than the normal one (i.e., the force sensed while the participant is stationary on the floor), nor much smaller. This means that the data coming from the floor cannot be used to effectively detect the desired movement. For this reason, it was decided to base the slide down recognition system only on the motion tracking data. Looking closely at the plot in question, it is possible to observe that the execution of a squat results in a significant reduction of the received value; as long as the participant remains in a squatting position, his height turns out to be much smaller than the normal one, or the one associated with a jump. Starting from this observation and exploiting the idea already introduced in the jump detection system, it was decided to define a certain threshold also for the identification of the squat movement: after that the game has started, a script defined within Unity begins to monitor the participant’s height. As long as it stays above the threshold value, it is assumed that the player is simply walking within the CAVE, or that he is executing a jump; however, as soon as the input value goes below the threshold, and remains in this state for several frames, it is then assumed that the player is performing a squat. This will lead to the trigger of the slide down ability of the virtual character. However, it should be obvious that the efficiency of such a system all depends on the chosen threshold value; a value very close to the normal one would lead to the identification of many false positives, while a value too small will not allow the correct identification of many squats. In addition, it is also essential to define what is meant as “normal height”, because not all the people are the same height: a threshold value that may be fine for a adult will never work for a child. In order to solve these problems, it was decided to proceed as follows: • Before executing the game, the player, wearing the helmet with the markers on it, is asked to stand stationary in an upright position within the CAVE. • Just after the start of the game, the procedure with the duty to monitor the 29
• 38. 3 – Infinite Runner motion tracking data saves the value received in the first frame. This is considered to be the height of the player, i.e., the “normal height”. • Finally, the threshold value is computed starting from this value: squat_threshold = normal_height × 0.77 (3.1) In other words, the threshold value is equal to 77%10 of the normal height. Taking as an example the data shown in the plot, the height of the participant is approximately equal to 1.65 m; as a consequence, the threshold value associated with him is 1.65 m × 0.77 ≈ 1.27 m (a sketch of this adaptive-threshold check is given at the end of this section). The system was widely tested by the SRL members. In general all were satisfied, calling it very intuitive and easy to understand; the adaptive threshold value guaranteed excellent results for all participants, even when the executed movements were not very pronounced. The system proved able to correctly distinguish whether a participant was executing a jump, a squat, or simply moving within the CAVE. The system was even tested by the young son (around 8 years old) of one of the laboratory members. The child understood very quickly how to play the game, and he had a lot of fun during the experience. Even in his case, the system worked as expected, distinguishing all his movements efficiently. 10 Empirically chosen.
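The C# sketch below shows this adaptive-threshold check, assuming the first tracked sample is taken as the player's normal height; the 0.77 factor comes from Equation 3.1, while the number of consecutive frames is an assumption, since the text only speaks of "several frames". Names are illustrative.

```csharp
using UnityEngine;

// Illustrative sketch of the slide-down (squat) detection described above.
public class SquatDetector : MonoBehaviour
{
    const float ThresholdFactor = 0.77f; // from Equation 3.1
    const int   FramesRequired  = 5;     // assumed; the text only says "several frames"

    float squatThreshold = -1f;
    int   framesBelow    = 0;

    // Called once per frame with the tracked head height (in metres).
    // Returns true when the slide-down ability should be triggered.
    public bool OnHeadHeight(float headY)
    {
        // The first sample after the game starts defines the player's "normal height".
        if (squatThreshold < 0f)
        {
            squatThreshold = headY * ThresholdFactor;
            return false;
        }

        framesBelow = headY < squatThreshold ? framesBelow + 1 : 0;
        return framesBelow == FramesRequired;
    }
}
```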
• 39. 3 – Infinite Runner 3.3.5 Haptic feedback With the introduction of the slide down detection system, it is now possible to play Infinite Runner within the CAVE environment. However, in order for the player to have the most immersive experience possible, we still want to introduce haptic feedback that can enhance the gameplay of our VR application. After performing an in-depth analysis of the game’s features, it was decided to trigger a haptic effect whenever a specific game event occurs. In particular, the game events taken into consideration are the collection of a coin and the collision with an obstacle, for which we want to take advantage of the functionality implemented for this purpose within the NIW server, i.e., the one that allows us to trigger a neutral sound at will from either some specific tiles or from all 36 tiles at once. To exploit this function, we have to execute the following operations (a sketch of the resulting messages is given after the list): • First of all, as soon as the game is started, we need to send a series of OSC messages to the NIW server, one for each game event that we want to consider; in our case, then, we need to send two messages. Each of these messages, addressed to a very specific address pattern, contains two arguments: a string, representing the type of game event, and a number, specifying the intensity of the feedback we want to associate with that event (the greater the number, the stronger the feedback). In the specific case of Infinite Runner, the two messages contain the pairs (“Coin”, 2) and (“Obstacle”, 5), respectively; in any case, it is possible to define as many game events as desired. The server, analyzing the address pattern of each received packet, is able to interpret them correctly, adding the data contained therein to a suitable data structure. • After this preliminary operation, it is possible to start playing. Whenever the player collects a coin or collides with an obstacle, an OSC message is sent whose parameters and address pattern depend on the type of feedback we want to be generated: – If we want to trigger the haptic effect from just one specific tile, the message contains three parameters, i.e., the string “Coin” or “Obstacle” (depending on the event that occurred) and two integers indicating the x and y coordinates of the tile. – Instead, if we want the haptic feedback to be generated by all the tiles, the message only contains the string “Coin” or “Obstacle”. – The last available option, which is the one used by Infinite Runner, allows us to generate a haptic effect from the tiles on which the player is currently located (this information is already on the server). Also in this case, the only argument contained in the packet is the string “Coin” or “Obstacle”, as in the previous case, but the address pattern is different. Since packets have different address patterns depending on the service requested, the NIW server is able to distinguish and interpret them in an appropriate manner. In any case, regardless of the type of packet, the first operation always performed by the server is to check whether the string contained as the first parameter is within the data structure previously created and, if so, retrieve the volume level associated with that specific game event. This number is then sent via OSC message to the Max/MSP patches running on the different Mac minis in charge of managing the rows containing the tiles from which we want to receive the haptic effect.
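The sketch below illustrates the shape of these messages in C#. The thesis does not name the OSC library used on the Unity side nor the exact address patterns, so the OscClient wrapper and the "/niw/..." patterns are placeholders; only the event names and intensities ("Coin", 2 and "Obstacle", 5) come from the text.

```csharp
// Placeholder for whatever OSC transport is actually used to reach the NIW server.
public interface OscClient
{
    void Send(string address, params object[] args);
}

public static class HapticEvents
{
    // Sent once, right after the game starts: one message per game event,
    // pairing an event name with the intensity of its haptic feedback.
    public static void RegisterEvents(OscClient osc)
    {
        osc.Send("/niw/register", "Coin", 2);
        osc.Send("/niw/register", "Obstacle", 5);
    }

    // During play, the address pattern selects how the effect is targeted.
    public static void TriggerOnTile(OscClient osc, string evt, int x, int y)
        => osc.Send("/niw/tile", evt, x, y);   // one specific tile

    public static void TriggerEverywhere(OscClient osc, string evt)
        => osc.Send("/niw/all", evt);          // all 36 tiles

    public static void TriggerUnderPlayer(OscClient osc, string evt)
        => osc.Send("/niw/player", evt);       // tiles under the player (used by Infinite Runner)
}
```

What matters here is the convention: the address pattern selects how the effect is targeted, while the first string argument identifies the game event whose intensity was registered at start-up.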
• 40. 3 – Infinite Runner In addition to the haptic effects generated as a result of game events, we also tried to exploit the capabilities offered by the system in providing different haptic textures (see Chapter 2 for the details). Since Infinite Runner takes place in two different virtual worlds, i.e., inside and outside of a castle, it was decided to associate a different haptic texture to the floor depending on whether the player is on the indoor or on the outdoor platform; in particular, these correspond to no haptic texture (“none”) and to the ice texture, respectively. The haptic texture associated with each tile is dynamically determined from the virtual world by a raycast method, sketched below. Here, the scene graph defined in Section 3.3.1 is used. In the parent node, which represents the CAVE, 36 child objects are instantiated at the position of each haptic tile. The position of each child object is lifted by a constant height h, and a ray is cast downwards. The first object hit by the ray, which is most likely a virtual ground plane, determines the haptic texture of the tile. If the hit object is the outside platform, the haptic texture associated with the tile is switched to ice; otherwise, it is set to none. The status associated with every tile is stored in a 6×6 matrix, which contains for each element the word “ice” or “none”. If the values contained in this data structure differ from the previous ones, at least one haptic texture has changed. As a consequence, all 36 values are inserted in an OSC message and sent to the NIW server, which is responsible for notifying all the Max/MSP patches, via OSC messages, of the haptic texture that should be associated with each tile. It was decided to always send all 36 values in order to optimize the number of sent packets: the NIW server always receives a single packet, regardless of the number of tiles whose associated haptic texture has changed.
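A possible Unity-side implementation of this per-tile raycast is sketched below. The probe transforms, the lift height h and the tag used to recognize the outdoor platform are illustrative assumptions; only the overall logic (36 probes, downward rays, ice/none labels, a single packet sent only on change) follows the description above.

```csharp
using UnityEngine;

// Illustrative sketch of the per-tile raycast used to pick haptic textures.
public class TileTextureProbe : MonoBehaviour
{
    public Transform[] tileProbes = new Transform[36]; // children placed at the 36 tile centres
    public float liftHeight = 1.0f;                    // "h" in the text (value assumed)

    string[] lastTextures = new string[36];

    // Returns the flattened 6x6 texture map when at least one tile changed, otherwise null.
    public string[] UpdateTextures()
    {
        string[] textures = new string[36];
        bool changed = false;

        for (int i = 0; i < tileProbes.Length; i++)
        {
            Vector3 origin = tileProbes[i].position + Vector3.up * liftHeight;

            // The first object hit below the tile is assumed to be the virtual ground.
            textures[i] = Physics.Raycast(origin, Vector3.down, out RaycastHit hit)
                          && hit.collider.CompareTag("OutdoorPlatform")
                          ? "ice" : "none";

            changed |= textures[i] != lastTextures[i];
        }

        lastTextures = textures;
        // All 36 values go into a single OSC packet only when something changed.
        return changed ? textures : null;
    }
}
```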
• 41. 3 – Infinite Runner 3.4 Experiments and Results The experiment is intended to explore the role that haptic feedback can play in enhancing a player’s experience and performance in a video game, and which elements of the game may benefit most from the addition of such feedback. This was done by having participants play the video game implemented in this chapter. As already described in the previous sections, the objective of the game is to collect the virtual coins approaching the character while avoiding obstacles. The game was played both with and without the addition of haptic feedback delivered to the participants’ feet via the floor. This feedback was provided both in response to user movement around the floor, so as to generate the feeling of a virtual ground texture, and in response to collisions with objects in the virtual world. In order not to make the game too challenging, for this first phase of the experiment it was decided to employ a simplified version of the game in which participants only had to move to the right or to the left so as to collect coins or avoid obstacles. 3.4.1 Methodology Measures Within the experiment, both quantitative and qualitative data were collected so as to determine any change in the user experience due to the introduction of haptic feedback through the vibrotactile floor. In particular, participants’ performances were examined by collecting in-game metrics, such as the number of collected coins and avoided obstacles; moreover, participants were evaluated using physiological and psychological information. It is in fact possible to detect participants’ emotional states at a certain time by observing biological data such as skin conductance, heart rate, etc. [38]. Participants’ actions were also videotaped to observe their behavior in response to different game events. Regarding psychological measurements, participants were requested to complete three different questionnaires. All the designed questionnaires can be found in Appendix A. Procedure The procedure was as follows: 1. Upon arrival, participants were given the consent form to read. 2. After agreeing to participate in the experiment, participants were shown a small video presentation11 on how to stand on the floor and how to play the video game. 3. Participants were asked to complete a pre-test questionnaire so as to understand their background. 4. Participants were asked to wear some small biosensors on their fingers and a band around the abdomen to collect physiological data during the experiment, consisting of body temperature, heart rate, skin conductance and respiration rate. The biosignal sensors are medical-grade devices manufactured by Thought Technology12 and were wiped clean with a disinfectant between uses. 5. Participants were asked to wear a small headset on their heads for motion tracking purposes. 6. Participants were asked to stand on the floor so that the game could begin. 11 The video can be seen at https://vimeo.com/152045111 12 Detailed information on the used system can be found at http://thoughttechnology.com/index.php/complete-systems/biofeedback-advanced-system.html
  • 42. 3 – Infinite Runner 7. Participants were asked to play four sessions, with each single one lasting 2 minutes. A repeated measures design was employed with two levels for each factor, with our factors being Haptic_Audio or NoHaptic_Audio. The order with which factors were presented to each participant was randomly chosen so as to minimize any learning curve effect. 8. Between each session, participants were asked to rest for one minute and com- plete a post-session questionnaire. 9. After playing all four game sessions, participants were asked to complete the post-test questionnaire. Subjects Eight male subjects between the ages of 19 and 28 took part in the experiment. All participants reported to have previously played to an endless running game; three participants (No. 1, 3 and 7) stated that they play video games for 0-5 hours/week, three other (No. 4, 5 and 8) for 5-10 hours/week and the last two (No. 2 and 6) for 10-15 hours/week. Six of them said to use the computer as their preferred video game platform, while the other two (No. 4 and 8) said to prefer home video game consoles such as PlayStation 3 (PS3), PS4 and Nintendo Wii. Since the experiment was fairly brief and involved play of a simple yet fun game, no monetary compensation was given to any participant. Although it was not originally planned, participants were divided into two differ- ent groups: one group consisting of participant No. 1, 2 and 3, and the second one consisting of the remaining five participants. This decision was taken in response to the comments made by the first three participants, who complained about the fact that in order to collect coins and avoid obstacles it was faster and easier to just move their heads instead of walking around the CAVE; in addition, they also observed that the vibrations generated from the floor were too subtle. For these reasons, for all other participants the following changes were made: • Disable audio during haptic sessions so as to have Haptic_NoAudio or No- Haptic_Audio for the repeated measures design. • Increase in the intensity of the haptic feedback when the user hits an obstacle. • Track of the player’s body movements, not just those of his head. This was achieved by asking users to put a series of markers on the band around the abdomen used to collect their respiration rate. As a result, the markers on the head are used only for perspective correction purposes. 34
• 43. 3 – Infinite Runner Figure 3.15: Average collected-coin and hit-obstacle rates for each participant, divided into haptic and audio sessions 3.4.2 Results The average rate of collected coins and hit obstacles, divided into haptic and audio sessions, is summarized in Figure 3.15. Among all the sessions with haptic feedback, the highest achieved rate was 94.09%, while the lowest was 70.31%. As for the sessions with audio, the highest rate was 95.52% and the lowest was 76.20%. The average over all participants and all haptic sessions was 82.29%, with a standard deviation of 7.35%, while for the sessions with audio the rate was 85.88%, with a standard deviation of 6.57%. Regarding the hit obstacles, the highest registered rate
• 44. 3 – Infinite Runner Figure 3.16: Results of the post-session questionnaire for Group #1 (top) and Group #2 (bottom) for haptic sessions was 14.21%, while the lowest was 0%. Considering instead the audio sessions, the highest rate was 15.26% and the lowest was 2.26%. The overall averages among all participants were equal to 7.25% (haptic sessions) and 6.91% (audio sessions), with standard deviations of 4.34% and 4.32%, respectively. Figure 3.16 shows the results of the post-session questionnaire for both group #1 and group #2. Participants in group #1 reported having performed better than those in group #2, with a greater preference for the sessions played with haptic feedback. Group #1 also found the game to be less challenging than group #2 did. In any case, as just seen from the in-game data, both groups obtained similar results when playing either with or without haptics; in other words, performance was not affected by the modality with which feedback was provided. Regarding the third question, it is interesting to note how group #1 showed a slight preference for the Haptic_Audio sessions over the ones played with audio only, while participants
  • 45. 3 – Infinite Runner Figure 3.17: Results of the post-test questionnaire in group #2 have much preferred sessions with audio only than the ones with just haptic. From this result we can assume that haptic is a nice addition for enhancing the overall experience, but audio is much more important. Finally, the results related to the post-test questionnaire are depicted in Fig- ure 3.17. First, it is interesting to note that, contrary to what came out from the previous questionnaire analysis, group #2 preferred more than group #1 the addi- tion of haptic effects to the game play. Not only, participants of group #2 also stated that the haptic feedback helped them in collecting coins and avoiding obstacles to a greater extent than that perceived by participants of group #1 (although the an- swers to these two questions have not been very satisfactory for both groups). This difference can be in part attributed to the fact that group #2 received a feedback of greater intensity whenever an obstacle was hit, variation that apparently was well appreciated. One of the most interesting details that popped up from the questionnaire is that group #1 has been the one to favor more the tracking system, which was modified for participants in group #2 after welcoming the received complaints. We are not sure of the reasons behind this result, as we would have expected that group #2 would have been the one to prefer the system; the only certain thing is that not all users had the same conception of what it meant to have the possibility to move freely within a virtual world. For example, some participants tried to avoid obstacles by jumping over them, others tried to collect the coins using their arms, others instead were satisfied to just move left and right as instructed. We should also admit that eight participants are not sufficient at all to draw any comprehensive conclusion; 37
  • 46. 3 – Infinite Runner however, it would be interesting to investigate the matter in-depth in next phase of the experiment. Seven participants stated that the overall experience was not stressful at all; only participant #4 asserted that it was very stressful, and this is why the standard deviation associated to this question for group #2 is really big. Since all the other users answered in the same manner, one plausible explanation of this divergence is that he misunderstood the meaning of the question. All participants felt immersed in the game play, mainly thanks to the potential- ity offered by the CAVE environment. They all said that they would be willing to play again if invited; participants of group #2 were the ones with the most positive answer to this question. As regards the physiological data, their analysis has not led to interesting con- clusions. This is mainly due to the fact that such data were found to be subject to considerable noise due to the physical movements performed by participants. With regard to video recordings, it was noted that participants had different interpreta- tions on what meant to have the freedom to move freely within the CAVE envi- ronment: someone merely moved to the left or to the right with his arms stretched along the body, a participant tried to jump to avoid obstacles although he was told it was not possible to do so, another one even tried to collect coins with his hands. 38
• 47. Chapter 4 MINIW The vibrotactile floor, combined with an immersive environment such as the CAVE, enables the development of multimodal video games in a whole new way. As demonstrated in the previous chapter, it is possible to use the haptic floor not only to give the user haptic feedback as a consequence of an event that happened in the game, but also as an interface that allows the user to interact with the virtual world. However, the environment used to develop such an experience has the big drawback of being really expensive and of occupying a lot of space. As a consequence, it is practically impossible for a normal user to use this technology directly at home. Motivated by the idea of providing a haptic experience using tools more accessible than a CAVE, we decided to exploit the knowledge acquired and to apply it to a 2×2 tile floor platform named MINIW. The objective of our work was to develop something to introduce the general public to the haptic floor technology. So far, people interested in trying the potentialities offered by the system had to be invited directly to our laboratory to see the projects developed using the haptic floor contained in the CAVE-like environment, as no one had developed anything using MINIW. We demonstrated our work during two big events: the first one was TEDxMontreal1 on November 7th in Montreal, and the second one was Maker Faire Ottawa2 on November 8th in Ottawa. Two experiences were created using MINIW. Their development required solving some intrinsic problems associated with this platform: 1. MINIW has limited dimensions, making it too dangerous for a user to walk on it. 1 http://tedxmontreal.com/en/ 2 http://makerfaireottawa.com/
• 48. 4 – MINIW 2. Tiles are made of plexiglass, and for this reason it is not possible to project anything on them. Even if they were opaque, a projector mount would be needed, which is cumbersome for exhibitions. Due to these limitations, it is not correct to think of MINIW just as a small version of the haptic floor housed within the CAVE. In the latter, the user can move freely and is always aware of his position. Moreover, the haptic feedback generated by each tile can be changed according to the environment that is currently projected on it. In order to show the features offered by MINIW, two projects were developed: • “Magic Tiles”: it allows us to demonstrate different haptic textures without the need to project anything on the floor. In particular, haptic textures have been associated with the colors of physical tiles. There are four foam tiles with different colors, and each tile has aluminum tape on the back forming a pattern. When a foam tile is aligned on a haptic tile, this tape shorts 2 of the 4 electrodes placed on each plexiglass tile. A microcontroller continuously monitors all the electrodes to identify any change, so that it is possible to select the desired haptic texture for each tile. • “Wakaduck”: it is a video game inspired by duck shooting games, but with a unique control scheme. A virtual can with a spring attached is placed on MINIW; the user steps on the can, aims at a duck by controlling the pressure and direction, and releases to shoot the can. A detailed description of these two projects can be found in the next sections. 4.1 System Description As depicted in Figure 4.1, the system is composed of three macro components: MINIW and two computers (Mac Mini and PC). MINIW is the first implemented prototype of the haptic floor. It consists of three main elements: 1. Arduino (FSRs): unlike the NIW, which uses a Gluion per row to read FSR data, a single microcontroller is responsible for receiving the data coming from all 16 FSR sensors placed under the tiles. All the data is then sent to the server on the Mac Mini over a USB serial connection. This operation is done 30 times/s. 2. Arduino (Electrodes): this microcontroller detects connections between the electrodes located on top of each tile and sends them to the server over another USB serial connection. Details can be found in Section 4.2.1.
• 49. 4 – MINIW Figure 4.1: System architecture 3. Actuators: an actuator is attached to each tile. The actuators are used to generate the haptic feedback. It is the server that decides which haptic texture should be rendered, according to the information provided by the two microcontrollers described just above. The data coming from the two microcontrollers is received and processed by server code written in C++ and hosted on a Mac Mini computer. The server code consists of three different threads, each with a different task: the first two are in charge of receiving data from the two microcontrollers, while the third one processes it. This last thread is the one that defines the texture feedback that each tile should have, based on the information received from the Arduino connected to the electrodes, and the one that contains the logic on how to interpret the FSR values in order to allow MINIW to be used as an interface to play Wakaduck. Once the textures are defined for all the tiles, the server sends an OSC message to the Max/MSP patch, which synthesizes the desired feedback any time someone steps on a tile. OSC messages are also sent to a client computer running Unity in order to notify it that something is happening on the floor; based on that information, the game status is updated accordingly. A full description of how the server communicates with the client is given in Section 4.3.3. 4.2 Magic Tiles To augment a floor or to synthesize a virtual floor, a display is preferred in order to give the user both visual and haptic feedback. For example, projection, an LCD screen, or a head-mounted display can be used, and both visual and haptic textures
  • 50. 4 – MINIW of the floor can be dynamically changed with time. However, all these solutions present some issues: • Projection can be occluded by participant’s body standing on the floor. • It is difficult to design an LCD display that can be stepped on, without con- sidering the fact that such displays are really expensive. • As explained in the introduction of this Chapter, it is very dangerous to use head-mounted displays such as the Oculus Rift since the floor is really small and the user may fall down. In order to avoid these problems, our goal was to introduce another layer on top of the plexiglass tiles that could somehow represent the haptic texture. The first idea was to use a smartphone in order to create an augmented reality experience. The user, standing on MINIW, would have been able to see the virtual texture associated with a tile just placing the phone camera on it. So, for example, if a tile was assigned the ice texture, the user would have seen an ice texture associated for that tile; by stepping on it, the user would have felt the ice haptic feedback and seen the ice texture cracking on the phone. Moreover, the user would have been able to dynamically change the visual feedback (and consequently the haptic one) by using the smartphone. However, after running some preliminary tests we noticed that there was not a strong connection between visual and haptic feedback. The user was mainly concentrated in looking at the phone instead of feeling the haptic feedback. Another idea was to create some drawings using some sheets of paper to put on top of the tiles. On each sheet there would have been drawn an element that would have allowed the user to associate that drawing to a particular texture. Following the example above, for the ice texture there would have been a drawing with an ice cube drawn on it. The user would have been able to place these sheets at will on the tiles, so that to create his own personal haptic floor. Using a Kinect and some image analysis techniques, it would have been possible to identify the item drawn on a particular sheet in order to select the appropriate tactile feedback for the tile underneath it. This solution, however, has the big problem that the user would have not been able to step on the drawings placed on top of the tiles as they would have been ruined. At most, the user would have been able to feel the haptic feedback by pressing on a tile using his hands. This however would not have made much sense, since the objective was to create a haptic floor, and not a haptic surface. From here it was born the idea of replacing the paper sheets with something which the user could step on, while maintaining the idea of having a distinctive element so as to be able to distinguish the object placed on top of each tile . We 42
• 51. 4 – MINIW a) Foam Tiles b) MINIW with foam tiles Figure 4.2: MINIW thought of using the classic colored foam tiles (Figure 4.2) with which toddlers like to play. With this approach, the differentiating element between the foam tiles is the color of the tiles themselves, and not something drawn on top of them. Moreover, the user is able to place these foam tiles on top of the plexiglass ones, with the freedom to step on them. A Kinect is still used to monitor the haptic floor and to adapt the haptic feedback of each tile based on the foam tile placed by the user on top of it. By swapping interlocking foam tiles of different colors, the haptic texture changes accordingly. The user can choose among red, blue, light blue and yellow, which represent the crushing-can, water, ice and sand textures, respectively. By actively swapping the tiles, users are expected to recognize the change in haptic feedback. This solution, however, presents a problem: where to place the Kinect so that it can correctly detect the colors of the different foam tiles placed on the haptic floor. Moreover, it is important to take into account that, with the user standing on top of the floor, it is hard to recognize the foam tiles’ colors due to occlusion. The first idea to solve this problem was to place the Kinect underneath the plexiglass tiles. This solution, however, was unfeasible since there is no room inside MINIW to place a Kinect: most of the space is occupied by the actuators and all the wires. We then decided to replace the Kinect with a normal webcam, which could easily have been fitted between the actuators. But this solution, too, proved to be unachievable: inside MINIW it is really dark, and illumination plays an important role when applying image analysis techniques. Moreover, the field of view of most webcams is not wide enough to monitor all four tiles at once. It would therefore have been necessary to add some sort of illumination and to use many