This document discusses two systems that use gestural interfaces for 3D navigation of maps using the Wiimote and Kinect controllers. The systems, called Wing and King, allow natural 3D navigation without using traditional point-and-click interfaces. An empirical user study evaluated how the degree of body involvement with each controller affected the user experience. Results showed that gestural interfaces can immerse users in a dynamic 3D experience and move interaction beyond the novice level quickly by exploiting physical movement.
Model-based Research in Human-Computer Interaction (HCI): Keynote at Mensch u... (Ed Chi)
- The document discusses human-computer interaction (HCI) research conducted at Xerox PARC in the 1970s and 1980s.
- Early contributions came from computer scientists interested in changing how people interact with information and psychologists studying the implications.
- Research established HCI as a science by adopting psychological methods and building an HCI techniques science.
EXPERIMEDIA: Experiments in live social and networked media (experimedia)
The document describes a project called EXPERIMEDIA that aims to conduct experiments in live social and networked media using future internet technologies. The project will explore new forms of social interaction and experiences enabled by future media internet. It will engage users from diverse communities through the research and development cycle. A variety of experiments are proposed, including personalized entertainment, social communities using 3D environments, capturing real world environments in 3D, and ensuring perceptual congruity between real and virtual worlds. The project calls for additional partners to join and conduct experiments starting in May 2012 and May 2013. It concludes that the SMAP workshop is well suited to join the project and conduct relevant experiments given its focus on media, semantics and personalization.
Introducing a simple way of programming robots and hardware in general, along with various approaches developed by Microsoft Research Cambridge. The talk was held at the MSRC Christmas Lecture 2005.
Exploring “live” Social and Networked Interaction with the Future Media Inter... (experimedia)
The document discusses the EXPERIMEDIA project which aims to accelerate research on innovative Future Media Internet services through testbeds. The testbeds will support experimentation of new forms of social interaction and experiences in both online and real-world communities. The project will engage users from diverse cultures through its research and development cycles and provide insights into how Future Media Internet systems impact their target ecosystems. It will be carried out by an 11-partner consortium over 3 years with a budget of 6.7 million Euros, 4.9 million of which is funded by the European Commission.
Design thinking for the next decade will center around three interesting paradigms. Especially as more and more information is emitted by various devices and complexity increases, we need to evolve a way to simplify the complex. This talk, given at IIT Kanpur, attempts to figure out how we would design simpler, more intelligent interfaces.
Hands and Speech in Space: Multimodal Input for Augmented Reality Mark Billinghurst
A keynote talk given by Mark Billinghurst at the ICMI 2013 conference, December 12th 2013. The talk is about how to use speech and gesture interaction with Augmented Reality interfaces.
Presentation at the Serious Games Institute October 27, 2009 by Ron Edwards on the nature of work, drivers of collaboration and need for better tools, and how virtual worlds are an optimum fit for enterprise collaboration. Ron is the CEO of Ambient Performance in London.
The document summarizes the EXPERIMEDIA project, which aims to accelerate research on innovative Future Media Internet technologies through testbeds. The testbeds will support experiments exploring new forms of social interaction and experience in both online and real-world communities. This will be conducted through real-world and large-scale trials of FI technologies. The project involves 11 partners from 8 countries and has a budget of over 6 million Euros to support experiments through open calls and live events.
Virtual reality is an artificial environment that is created with software and presented to users in a way that makes them feel like they are experiencing a real environment. The document discusses the concept of virtual reality, why it is needed, and how virtual reality systems work. It provides examples of virtual reality being used for entertainment, medical applications like surgery training, manufacturing design, and education/training through simulators. The key components of a virtual reality system include input, processing, rendering, and a virtual world database to create immersive or augmented reality experiences.
IRJET - Space Invaders: An Educational Game in Virtual Reality (IRJET Journal)
This document describes the development of an educational virtual reality game called "Space Invaders" that teaches users about the eight planets in the solar system. The game was created using technologies like HTML, JavaScript, JSON, and React on an A-Frames platform to work on VR headsets, computers, and mobile devices. In the game, users must defeat aliens on each planet level to move on to learning facts about that planet and answering a multiple choice quiz question. The game was designed to be an engaging educational tool that combines virtual reality, gaming, and space science to improve learning outcomes.
Peter Morville gave a presentation on ubiquitous information architecture and cross-channel strategies. He discussed how fragmentation across sites and platforms creates usability issues for users. He advocated for a unified "one library" approach and mapping the customer journey across channels to improve findability. Morville also covered designing for continuity across devices and contexts like location. The talk emphasized taking a holistic view of the user experience across the physical and digital to create coherent, connected experiences.
This document describes an augmented reality system called LooknLearn that allows users to leave "video streams" associated with locations in physical spaces. The system was developed to enhance social interactivity by allowing users, rather than just designers, to create content. It uses GPS rather than physical tags to locate streams. A video stream consists of a primary video layered with additional "plaited" videos that shift in and out of view. The document discusses the technical capabilities and design considerations of the system, including how to define, embed, and link video streams to allow nonlinear navigation between related content.
This document summarizes an augmented reality system called LooknLearn that allows users to leave "video streams" consisting of video, movies, or animations at locations outdoors. Using an authoring toolkit, individuals can create and place these streams. When other users navigate to the location of a stream using GPS and a compass, they can view the associated video content. Streams can also be linked together so viewers are directed from one stream to related streams. The system aims to enhance spaces by allowing users to socially create and share location-based multimedia content without requiring physical tags or modifications to the environment.
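The LooknLearn triggering idea described above can be sketched in a few lines: a stream plays when the user's GPS position falls within some radius of the stream's geotag. The radius, the stream record layout, and the function names below are assumptions for illustration, not details from the system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearby_streams(user, streams, radius_m=25):
    """Names of streams whose geotag lies within radius_m of the user."""
    return [s["name"] for s in streams
            if haversine_m(user[0], user[1], s["lat"], s["lon"]) <= radius_m]
```

A compass heading would further narrow this to streams the user is facing, which is how the authoring toolkit's navigation between linked streams could remain directional rather than purely proximity-based.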
The document describes three venues that will participate in the EXPERIMEDIA project: the CAR sports training center in Catalonia, Spain, the ski resort of Schladming, Austria, and the Foundation of the Hellenic World cultural center in Greece. Each venue is outlined with details about its role, stakeholders, benefits, infrastructure, and content for participatory and educational multimedia experiences.
Software Agents in Support of Human Argument Mapping
Abstract. This paper reports progress in realizing human-agent argumentation, which we argue will be part of future Computer-Supported Collaborative Argumentation (CSCA) tools. With a particular interest in argument mapping, we present two investigations demonstrating how a particular agent-oriented language and architecture can augment CSCA: (i) the use of the IBIS formalism enabling Brahms agents to simulate argumentation, and (ii) the extension of the Compendium tool by integrating it with Brahms agents tasked with detecting related discourse elsewhere.
Keywords. Argument Mapping, IBIS, Compendium, Brahms, Multi-Agent Systems
3rd International Conference on Computational Modelling of Argument
Desenzano del Garda, Italy, 8-10 Sept. 2010
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop, PC, or Xbox play games using natural, bodily gestures. This paper presents a way of playing free Flash games on the internet using an ordinary webcam with the help of open-source technologies. In human activity recognition, emphasis is placed on pose estimation and the consistency of the player's pose. These are estimated with the help of an ordinary web camera at resolutions ranging from VGA to 20 MP. Our work involved showing the user a 10-second documentary on how to play a particular game using gestures and on the various kinds of gestures that can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the chosen game on popular Flash game sites such as Miniclip, Games Arcade, and GameStop, loads it by clicking at various places, and brings it to a state where the user only needs to perform gestures to start playing. At any point the user can exit by pressing the Esc key, and the program releases all controls and returns to the desktop. The results obtained using an ordinary webcam matched those of the Kinect, and users could relive the experience of playing free Flash games on the net. Effective in-game advertising could therefore also be achieved, offering disruptive growth to advertising firms.
This document discusses the history and components of virtual reality systems. It begins by defining virtual reality as an artificial 3D environment created by computer hardware and software that users can interact with and appears real. The document then summarizes the history of virtual reality from its origins in the 1980s to current applications. It describes the key components of virtual reality systems including head mounted displays, audio units, gloves, and reality engines. It also discusses types of virtual reality systems from non-immersive to fully immersive and how immersion is experienced by users. The document concludes by outlining advantages and disadvantages of virtual reality systems.
Globaltronic has developed an education solutions division to help modernize school systems with new technologies like interactive whiteboards. Their infrared tactile whiteboard system offers several advantages over other technologies, including plug-and-play setup, low power consumption, high accuracy and speed, and the ability to use dry-erase markers. The whiteboards are available in multiple sizes and feature a durable ceramic steel surface. Interactive whiteboards can enhance learning by facilitating communication, motivation, and critical thinking, and by shifting students into a more active role in their education.
This paper outlines the development of a wearable game controller incorporating vibrotactile haptic feedback that provides a low-cost, versatile, and intuitive interface for controlling digital games. The device differs from many traditional haptic feedback implementations in that it combines vibrotactile haptic feedback with gesture-based input, thus becoming a two-way conduit between the user and the virtual environment. The device is intended to challenge what is considered an "interface" and draws on work in Actor-Network theory to purposefully blur the boundary between man and machine. This allows for a more immersive experience: rather than making the user feel like they are controlling an aircraft, the intuitive interface allows the user to become the aircraft, controlled by the movements of the user's hand. The device invites playful action and thrill, and charts new territory in portable, low-cost solutions for haptic controllers in a gaming context.
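The two-way conduit described above can be sketched as two mappings: hand tilt drives the aircraft's roll (input direction), and game events drive vibration motor intensity (feedback direction). The clamping range, event names, and duty-cycle values below are illustrative assumptions, not the paper's implementation.

```python
def roll_from_tilt(tilt_deg, max_tilt=45.0):
    """Input direction: clamp hand tilt and normalise to a roll in [-1, 1]."""
    t = max(-max_tilt, min(max_tilt, tilt_deg))
    return t / max_tilt

def vibration_duty(event):
    """Feedback direction: game event -> motor PWM duty cycle (0..1).
    Event names here are invented for illustration."""
    return {"hit": 1.0, "near_miss": 0.4, "engine": 0.1}.get(event, 0.0)
```

Keeping both mappings in one control loop is what makes the device feel like a single conduit rather than separate input and output peripherals.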
The document discusses the concepts of information architecture and the role of information architects. It provides definitions of information architecture as the structural design of shared information environments and the combination of organization, labeling, search, and navigation systems. It also describes information architecture as the art and science of shaping information products and experiences to support usability and findability. Additionally, it frames information architecture as an emerging discipline focused on bringing principles of design and architecture to the digital landscape.
This document provides information about virtual reality (VR) including its concepts, forms, applications, and devices. It discusses three forms of VR: through-the-window, immersive, and second person. VR applications include perambulation, synthetic experiences, and realization. Key VR devices described are data gloves, head mounted displays, VR chairs, cameras, and sound systems. Basics of the VRML file format and elements are also covered.
This document discusses Kinect programming and gesture-based interaction using the Kinect sensor. It provides an agenda for a workshop that will introduce Kinect and how it works, developing for Kinect, creating a 3D user model and gesture recognition. It also discusses concepts like natural user interfaces, computer vision, applications of computer vision, and resources and tools for Kinect programming.
The document provides an overview of the history of currency in India, noting that the first currency notes were introduced by the Bank of Bengal in the early 1800s, though they lacked security features, and that the British government later established a monopoly on printing currency under the Paper Currency Act of 1861, introducing several different series of notes up to the modern-looking George V series in 1923, which also included a high-value 10,000-denomination note.
Project report (2003) - Using Flash MX Cursor-control component to enhance co... (Amir Dotan)
1. The document describes a Cursor-control component developed for Macromedia Flash MX to enhance computer interaction for motion-impaired users. It was inspired by studies showing that taking control of the cursor can reduce time for target selection tasks.
2. The component replaces the system cursor with a virtual cursor that it can control. When the virtual cursor detects proximity to a target, it centers on the target and changes shape for easier clicking.
3. Future work includes adding a mechanism to trigger clicks after time delays to assist users who have difficulty clicking targets. The component is intended to make point-and-click tasks easier for people with limited motor control.
This document discusses multimedia authoring tools and techniques. It covers 3D modeling software like 3D Studio Max and how to use texture mapping and animation. It also discusses web page authoring using Dreamweaver and how layers can represent different HTML objects. Automatic authoring of multimedia is discussed, specifically problems with moving from text-based to image-based authoring and managing nodes from legacy documents. Simple animation is demonstrated using a fish sprite moving along a path overlaid on video.
Van der Kamp (2011): Gaze and voice controlled drawing (mrgazer)
This document describes a drawing application that is controlled using both gaze and voice inputs. The application allows users to draw various shapes like lines, rectangles, ellipses, and polygons using only their eyes to position the cursor and voice commands to activate the drawing. Previous gaze-based drawing tools required users to dwell their gaze at a location for a period of time to activate drawing, which caused delays and accidental activations. The proposed system aims to improve the user experience by removing the need to dwell gaze and only using gaze for positioning. The drawing application was implemented and evaluated through user trials. The results showed that while gaze and voice offered less control than traditional inputs, participants found it more enjoyable to use.
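The division of labour described above, gaze for continuous positioning and voice for discrete activation, can be sketched as an event-driven canvas. The command words and shape representation below are invented for illustration; the point is that nothing is ever triggered by dwelling.

```python
class GazeVoiceCanvas:
    """Gaze positions the cursor; voice commands activate drawing."""

    def __init__(self):
        self.gaze = (0, 0)   # latest gaze sample
        self.anchor = None   # first corner, set by a voice command
        self.shapes = []

    def on_gaze(self, x, y):
        # Continuous input: only moves the cursor, never draws.
        self.gaze = (x, y)

    def on_voice(self, word):
        # Discrete input: the only way drawing is ever activated.
        if word == "start":
            self.anchor = self.gaze
        elif word == "line" and self.anchor is not None:
            self.shapes.append(("line", self.anchor, self.gaze))
            self.anchor = None
```

Because activation is a separate channel, gaze can rest anywhere without accidentally drawing, which is exactly the dwell-time problem the paper set out to remove.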
The document discusses a concept called Project Modai, which explores designing a mobile device interface that can forge an emotional connection between the user and device by understanding the user's needs based on context, having meaningful interactions, and adapting to technological advances in a sustainable way over time through two paradigms representing social and work modes. It aims to address issues with current devices like lack of understanding of user needs, ineffective ways to get a user's attention, meaningless interactions, and fast obsolescence making it hard to form bonds.
This document provides an introduction and overview of a project on vision-based hand gesture recognition. It discusses the motivation for the project and how hand gestures can provide a more natural human-computer interaction compared to traditional input devices like keyboards and mice. The document outlines the objectives of the project, which are to develop a system that can identify specific hand gestures using a webcam and interpret them to control mouse operations on a computer. It also provides an overview of the organization of the project report and the topics that will be discussed in subsequent chapters, such as the literature review, proposed methodology, results, and conclusions.
IRJET - Finger Gesture Recognition Using Linear Camera (IRJET Journal)
This document describes a system for finger gesture recognition using a linear camera. The system aims to allow users to control basic computer functions through finger gestures as an alternative to using a mouse or keyboard. It works by using image processing techniques on video captured by the linear camera to detect the user's finger movements and map them to cursor movements or actions. The system is broken down into four main stages - skin detection to identify finger regions, finger contour extraction, finger tracking, and gesture recognition to identify gestures and map them to computer functions like play, pause, volume control etc. This vision-based approach allows for contactless control and could help users in situations where mouse or keyboard is unavailable.
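The four-stage pipeline in this summary can be sketched end to end on a toy frame of RGB tuples. The skin rule is a common heuristic rather than the paper's exact one, and the fingertip choice, threshold, and command names are assumptions for illustration.

```python
def skin_mask(frame):
    """Stage 1: crude RGB skin rule -> set of (x, y) skin pixels."""
    return {(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if r > 95 and g > 40 and b > 20 and r > g and r > b}

def fingertip(mask):
    """Stage 2: take the topmost skin pixel as the fingertip."""
    return min(mask, key=lambda p: p[1]) if mask else None

def motion(prev, cur):
    """Stage 3: motion vector between consecutive fingertip positions."""
    if prev is None or cur is None:
        return (0, 0)
    return (cur[0] - prev[0], cur[1] - prev[1])

def gesture(delta, thresh=3):
    """Stage 4: map the motion vector to a command (names illustrative)."""
    dx, dy = delta
    if dx > thresh:
        return "volume_up"
    if dx < -thresh:
        return "volume_down"
    return "none"
```

Running these stages per frame turns a contactless camera feed into the same discrete events a keyboard or mouse would produce, which is what makes the approach usable when those devices are unavailable.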
The document summarizes the EXPERIMEDIA project, which aims to accelerate research on innovative Future Media Internet technologies through testbeds. The testbeds will support experiments exploring new forms of social interaction and experience in both online and real-world communities. This will be conducted through real-world and large-scale trials of FI technologies. The project involves 11 partners from 8 countries and has a budget of over 6 million Euros to support experiments through open calls and live events.
Virtual reality is an artificial environment that is created with software and presented to users in a way that makes them feel like they are experiencing a real environment. The document discusses the concept of virtual reality, why it is needed, and how virtual reality systems work. It provides examples of virtual reality being used for entertainment, medical applications like surgery training, manufacturing design, and education/training through simulators. The key components of a virtual reality system include input, processing, rendering, and a virtual world database to create immersive or augmented reality experiences.
IRJET-Space Invaders: An Educational Game in Virtual RealityIRJET Journal
This document describes the development of an educational virtual reality game called "Space Invaders" that teaches users about the eight planets in the solar system. The game was created using technologies like HTML, JavaScript, JSON, and React on an A-Frames platform to work on VR headsets, computers, and mobile devices. In the game, users must defeat aliens on each planet level to move on to learning facts about that planet and answering a multiple choice quiz question. The game was designed to be an engaging educational tool that combines virtual reality, gaming, and space science to improve learning outcomes.
Peter Morville gave a presentation on ubiquitous information architecture and cross-channel strategies. He discussed how fragmentation across sites and platforms creates usability issues for users. He advocated for a unified "one library" approach and mapping the customer journey across channels to improve findability. Morville also covered designing for continuity across devices and contexts like location. The talk emphasized taking a holistic view of the user experience across the physical and digital to create coherent, connected experiences.
This document describes an augmented reality system called LooknLearn that allows users to leave "video streams" associated with locations in physical spaces. The system was developed to enhance social interactivity by allowing users, rather than just designers, to create content. It uses GPS rather than physical tags to locate streams. A video stream consists of a primary video layered with additional "plaited" videos that shift in and out of view. The document discusses the technical capabilities and design considerations of the system, including how to define, embed, and link video streams to allow nonlinear navigation between related content.
This document summarizes an augmented reality system called LooknLearn that allows users to leave "video streams" consisting of video, movies, or animations at locations outdoors. Using an authoring toolkit, individuals can create and place these streams. When other users navigate to the location of a stream using GPS and a compass, they can view the associated video content. Streams can also be linked together so viewers are directed from one stream to related streams. The system aims to enhance spaces by allowing users to socially create and share location-based multimedia content without requiring physical tags or modifications to the environment.
The document describes three venues that will participate in the EXPERIMEDIA project: the CAR sports training center in Catalonia, Spain, the ski resort of Schladming, Austria, and the Foundation of the Hellenic World cultural center in Greece. Each venue is outlined with details about its role, stakeholders, benefits, infrastructure, and content for participatory and educational multimedia experiences.
Software Agents in Support of Human Argument Mapping
Abstract. This paper reports progress in realizing human-agent argumentation, which we argue will be part of future Computer-Supported Collaborative Argumentation (CSCA) tools. With a particular interest in argument mapping, we present two investigations demonstrating how a particular agent-oriented language and architecture can augment CSCA: (i) the use of the IBIS formalism enabling Brahms agents to simulate argumentation, and (ii) the extension of the Compendium tool by integrating it with Brahms agents tasked with detecting related discourse elsewhere.
Keywords. Argument Mapping, IBIS, Compendium, Brahms, Multi-Agent Systems
3rd International Conference on Computational Modelling of Argument
Desenzano del Garda, Italy, 8-10 Sept. 2010
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
- Gesture gaming is a method by which users having a laptop/pc/x-box play games using natural or
bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam
with the help of open source technologies. Emphasis in human activity recognition is given on the pose
estimation and the consistency in the pose of the player. These are estimated with the help of an ordinary web
camera having different resolutions from VGA to 20mps. Our work involved giving a 10 second documentary to
the user on how to play a particular game using gestures and what are the various kinds of gestures that can be
performed in front of the system. The initial inputs of the RGB values for the gesture component is obtained by
instructing the user to place his component in a red box in about 10 seconds after the short documentary before
the game is finished. Later the system opens the concerned game on the internet on popular flash game sites like
miniclip, games arcade, GameStop etc and loads the game clicking at various places and brings the state to a
place where the user is to perform only gestures to start playing the game. At any point of time the user can call
off the game by hitting the esc key and the program will release all of the controls and return to the desktop. It
was noted that the results obtained using an ordinary webcam matched that of the Kinect and the users could
relive the gaming experience of the free flash games on the net. Therefore effective in game advertising could
also be achieved thus resulting in a disruptive growth to the advertising firms.
This document discusses the history and components of virtual reality systems. It begins by defining virtual reality as an artificial 3D environment created by computer hardware and software that users can interact with and appears real. The document then summarizes the history of virtual reality from its origins in the 1980s to current applications. It describes the key components of virtual reality systems including head mounted displays, audio units, gloves, and reality engines. It also discusses types of virtual reality systems from non-immersive to fully immersive and how immersion is experienced by users. The document concludes by outlining advantages and disadvantages of virtual reality systems.
Globaltronic has developed an education solutions division to help modernize school systems with new technologies like interactive whiteboards. Their infrared tactile whiteboard system offers several advantages over other technologies, including plug-and-play setup, low power consumption, high accuracy and speed, and the ability to use dry-erase markers. The whiteboards are available in multiple sizes and feature a durable ceramic steel surface. Interactive whiteboards can enhance learning by facilitating communication, motivation critical thinking, and shifting students into a more active role in their education.
This paper outlines the development of a wearable game controller incorporating vibrotacticle haptic feedback that provides a low cost, versatile and intuitive interface for controlling digital games. The device differs from many traditional haptic feedback implementation in that it combines vibrotactile based haptic feedback with gesture based input, thus becoming a two way conduit between the user and the virtual environment. The device is intended to challenge what is considered an “interface” and draws on work in the area of Actor-Network theory to purposefully blur the boundary between man and machine. This allows for a more immersive experience, so rather than making the user feel like they are controlling an aircraft the intuitive interface allows the user to become the aircraft that is controlled by the movements of the user's hand. This device invites playful action and thrill. It bridges new territory on portable and low cost solutions for haptic controllers in a gaming context.
The document discusses the concepts of information architecture and the role of information architects. It provides definitions of information architecture as the structural design of shared information environments and the combination of organization, labeling, search, and navigation systems. It also describes information architecture as the art and science of shaping information products and experiences to support usability and findability. Additionally, it frames information architecture as an emerging discipline focused on bringing principles of design and architecture to the digital landscape.
This document provides information about virtual reality (VR) including its concepts, forms, applications, and devices. It discusses three forms of VR: through-the-window, immersive, and second person. VR applications include perambulation, synthetic experiences, and realization. Key VR devices described are data gloves, head mounted displays, VR chairs, cameras, and sound systems. Basics of the VRML file format and elements are also covered.
This document discusses Kinect programming and gesture-based interaction using the Kinect sensor. It provides an agenda for a workshop that will introduce Kinect and how it works, developing for Kinect, creating a 3D user model and gesture recognition. It also discusses concepts like natural user interfaces, computer vision, applications of computer vision, and resources and tools for Kinect programming.
Project report (2003) - Using Flash MX Cursor-control component to enhance co... (Amir Dotan)
1. The document describes a Cursor-control component developed for Macromedia Flash MX to enhance computer interaction for motion-impaired users. It was inspired by studies showing that taking control of the cursor can reduce time for target selection tasks.
2. The component replaces the system cursor with a virtual cursor that it can control. When the virtual cursor detects proximity to a target, it centers on the target and changes shape for easier clicking.
3. Future work includes adding a mechanism to trigger clicks after time delays to assist users who have difficulty clicking targets. The component is intended to make point-and-click tasks easier for people with limited motor control.
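The snap-to-target behaviour described above can be sketched in a few lines. This is a minimal illustration, not the component's actual ActionScript code: the target list, snap radius, and function names are assumptions made for the example.

```python
import math

# Hypothetical sketch of the component's snap behaviour: the virtual
# cursor follows the system cursor until it comes within a proximity
# threshold of a target, then centres on that target for easier clicking.
TARGETS = [(100, 100), (300, 250)]  # centres of clickable targets (assumed)
SNAP_RADIUS = 30                    # proximity threshold in pixels (assumed)

def virtual_cursor_position(x, y, targets=TARGETS, radius=SNAP_RADIUS):
    """Return the drawn cursor position: the raw position, or the
    nearest target centre once within the snap radius."""
    for tx, ty in targets:
        if math.hypot(x - tx, y - ty) <= radius:
            return (tx, ty)   # centre on the target
    return (x, y)             # otherwise follow the system cursor
```

For example, a cursor at (110, 95) lies within 30 pixels of the target at (100, 100), so the virtual cursor snaps to the target centre.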
This document discusses multimedia authoring tools and techniques. It covers 3D modeling software like 3D Studio Max and how to use texture mapping and animation. It also discusses web page authoring using Dreamweaver and how layers can represent different HTML objects. Automatic authoring of multimedia is discussed, specifically problems with moving from text-based to image-based authoring and managing nodes from legacy documents. Simple animation is demonstrated using a fish sprite moving along a path overlaid on video.
Van der Kamp (2011): Gaze and voice controlled drawing (mrgazer)
This document describes a drawing application that is controlled using both gaze and voice inputs. The application allows users to draw various shapes like lines, rectangles, ellipses, and polygons using only their eyes to position the cursor and voice commands to activate the drawing. Previous gaze-based drawing tools required users to dwell their gaze at a location for a period of time to activate drawing, which caused delays and accidental activations. The proposed system aims to improve the user experience by removing the need to dwell gaze and only using gaze for positioning. The drawing application was implemented and evaluated through user trials. The results showed that while gaze and voice offered less control than traditional inputs, participants found it more enjoyable to use.
The document discusses a concept called Project Modai, which explores designing a mobile device interface that can forge an emotional connection between the user and device by understanding the user's needs based on context, having meaningful interactions, and adapting to technological advances in a sustainable way over time through two paradigms representing social and work modes. It aims to address issues with current devices like lack of understanding of user needs, ineffective ways to get a user's attention, meaningless interactions, and fast obsolescence making it hard to form bonds.
This document provides an introduction and overview of a project on vision-based hand gesture recognition. It discusses the motivation for the project and how hand gestures can provide a more natural human-computer interaction compared to traditional input devices like keyboards and mice. The document outlines the objectives of the project, which are to develop a system that can identify specific hand gestures using a webcam and interpret them to control mouse operations on a computer. It also provides an overview of the organization of the project report and the topics that will be discussed in subsequent chapters, such as the literature review, proposed methodology, results, and conclusions.
IRJET - Finger Gesture Recognition Using Linear Camera (IRJET Journal)
This document describes a system for finger gesture recognition using a linear camera. The system aims to allow users to control basic computer functions through finger gestures as an alternative to using a mouse or keyboard. It works by using image processing techniques on video captured by the linear camera to detect the user's finger movements and map them to cursor movements or actions. The system is broken down into four main stages - skin detection to identify finger regions, finger contour extraction, finger tracking, and gesture recognition to identify gestures and map them to computer functions like play, pause, volume control etc. This vision-based approach allows for contactless control and could help users in situations where mouse or keyboard is unavailable.
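The four-stage pipeline described above can be illustrated with a toy sketch. This is not the paper's implementation: it collapses contour extraction and tracking into a simple centroid, uses a common RGB skin heuristic, and invents the gesture names; a toy nested-list "frame" stands in for real linear-camera video.

```python
# Illustrative sketch of the four stages: skin detection, finger
# localisation (standing in for contour extraction and tracking),
# and gesture recognition mapped to a computer function.

def detect_skin(frame):
    """Stage 1: mark pixels satisfying a simple RGB skin heuristic."""
    return [[(r > 95 and g > 40 and b > 20 and r > g and r > b)
             for (r, g, b) in row] for row in frame]

def finger_centroid(mask):
    """Stages 2-3 collapsed: locate the finger as the centroid of the
    detected skin region."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, on in enumerate(row) if on]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

def recognize_gesture(prev, curr, min_move=2.0):
    """Stage 4: map horizontal finger motion to a function (names assumed)."""
    if prev is None or curr is None:
        return "none"
    dx = curr[0] - prev[0]
    if dx > min_move:
        return "volume_up"
    if dx < -min_move:
        return "volume_down"
    return "hold"
```

Feeding two consecutive frames in which the skin region moves rightward yields a "volume_up" gesture under these assumed mappings.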
Real time hand gesture recognition system for dynamic applications (ijujournal)
Virtual environments have long been considered a means for more visceral and efficient human-computer interaction across a diversified range of applications. The spectrum of applications includes analysis of complex scientific data, medical training, military simulation, phobia therapy, and virtual prototyping. With the evolution of ubiquitous computing, current user interaction approaches based on keyboard, mouse, and pen are not sufficient for the still-widening spectrum of human-computer interaction. Gloves and sensor-based trackers are unwieldy, constraining, and uncomfortable to use, and these limitations also restrict the usable command set. Direct use of the hands as an input device is an innovative method for providing natural human-computer interaction, with a lineage running from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to full-fledged multi-participant Virtual Environment (VE) systems. One can conceive a future era of human-computer interaction built on 3D applications where the user may move and rotate objects simply by moving and rotating a hand, all without the help of any input device. The research effort centres on implementing an application that employs computer vision algorithms and gesture recognition techniques, resulting in a low-cost interface device for interacting with objects in a virtual environment using hand gestures. The prototype architecture comprises a central computational module that applies the CAMShift technique for tracking the hand and its gestures. A Haar-like classifier is responsible for locating the hand position and classifying the gesture. Gestures are recognized by mapping the number of convexity defects formed in the hand to the assigned gestures. The virtual objects are rendered using the OpenGL library.
This hand gesture recognition technique aims to substitute the mouse for interaction with virtual objects. It will be useful for controlling applications such as virtual games and image browsing in a virtual environment using hand gestures.
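Once the defect count is extracted from the hand contour, the recognition step reduces to a lookup, as the sketch below shows. The gesture names and the exact count-to-gesture assignments are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the defect-count mapping: convexity defects are the
# gaps between extended fingers, so the count identifies the hand pose.
# The assignments below are assumed for illustration.
DEFECT_TO_GESTURE = {
    0: "point/select",            # one extended finger: no inter-finger gap
    1: "two fingers: translate",
    2: "three fingers: rotate",
    3: "four fingers: scale",
    4: "open hand: release",
}

def classify_gesture(defect_count):
    """Map a convexity-defect count to its assigned gesture label."""
    return DEFECT_TO_GESTURE.get(defect_count, "unknown")
```

In a full pipeline the defect count would come from contour analysis of the tracked hand region; any count outside the trained set falls through to "unknown".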
WorldKit: Rapid and Easy Creation of Ad-hoc Interactive Applications on Everyday Surfaces.
Instant access to computing, when and where we need it, has long been one of the aims of research areas such as ubiquitous computing. In this paper, we describe the WorldKit system, which makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified "each time we sat down" by "painting" them next to us. From the programmer's perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Further, it is extensible to new, custom interactors in a way that closely mimics conventional 2D graphical user interfaces, hiding much of the complexity of working in this new domain. We detail the hardware and software implementation of our system, and several example applications built using the library.
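The abstraction the authors describe resembles wiring up widgets in a 2D GUI toolkit: paint a region, attach a callback. The sketch below illustrates that idea only; the class and method names are assumptions and do not reproduce the real WorldKit API.

```python
# A sketch of a WorldKit-style interactor abstraction: a touch-sensitive
# region "painted" onto a surface, with a callback attached, much like a
# button in a conventional 2D widget toolkit. Names are illustrative.
class TouchInteractor:
    def __init__(self, x, y, w, h, on_touch):
        self.bounds = (x, y, w, h)   # painted region on the surface
        self.on_touch = on_touch     # callback, as in a 2D GUI toolkit

    def contains(self, px, py):
        x, y, w, h = self.bounds
        return x <= px < x + w and y <= py < y + h

class Surface:
    """Dispatches sensed touch events to whichever interactor was hit."""
    def __init__(self):
        self.interactors = []

    def paint(self, interactor):
        self.interactors.append(interactor)

    def touch(self, px, py):
        for it in self.interactors:
            if it.contains(px, py):
                it.on_touch(px, py)
                return True
        return False
```

In the real system the touch events would come from the depth camera and the painted regions from the projector; here a direct call to `touch` stands in for the sensing layer.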
The document describes a system for 3D modeling using hand gestures as input. It uses a vision-based tracking system to recognize hand gestures without any instruments attached to the hands. The system supports basic modeling tasks like selection, translation, rotation, and scaling of 3D objects using just five static hand gestures. Visual feedback is provided to help users perceive interactions. The goal is to provide an intuitive interface for 3D modeling that requires little or no training.
This paper advocates for a new type of augmented reality (AR) interface called Tangible AR. Tangible AR interfaces combine the enhanced display possibilities of AR with intuitive physical manipulation from tangible user interfaces (TUIs). Specifically, 1) each virtual object is registered to a corresponding physical object, and 2) users interact with virtual objects by manipulating the physical objects. The paper presents some prototype Tangible AR interfaces and argues they support seamless interaction between real and virtual worlds through natural physical manipulations.
This document summarizes a technology called Sixth Sense, which allows users to perform gestures to interact with digital information rather than using keyboards or mice. It discusses using commands recognized by a speech integrated circuit instead of gestures to overcome limitations of gesture recognition. The speech IC is trained to recognize commands, which then trigger actions performed by a mobile device and projected for the user.
The document provides an overview of multi-touch technology. It discusses how multi-touch allows users to interact with devices using multiple fingers on a touchscreen. The technology has emerged in recent years in devices like phones, tablets, and monitors. The document traces the history of touch technology from early experiments in the 1970s to mainstream exposure through products from Microsoft and Apple in 2007. It examines insights that can be gained from analyzing patent data related to touch technologies.
Virtual Mouse Control Using Hand Gestures (IRJET Journal)
This document describes a system for controlling a computer mouse using hand gestures detected by a webcam. The system uses computer vision and image processing techniques to track hand movements and identify gestures. It analyzes video frames from the webcam to extract the hand contour and detect gestures. Specific gestures are mapped to mouse functions like movement, left/right clicks, and scrolling. The system aims to provide an intuitive, hands-free way to control the mouse for physically disabled people or those uncomfortable with touchpads. It could help the millions affected by carpal tunnel syndrome annually in India. The document outlines the system architecture, methodology including hand tracking and gesture recognition, and concludes the technology provides better human-computer interaction without requiring a physical mouse.
Engelman (2011): Exploring interaction modes for image retrieval (mrgazer)
This document discusses exploring different interaction modes for image retrieval. It describes developing a framework that allows multimodal interaction using techniques like eye tracking, voice recognition, and multi-touch. An experiment was conducted to compare the usability of different interaction methods for query by example image retrieval. Nine participants used four methods - anchor, gaze, mouse, and touch - to select regions in images. Metrics like accuracy, precision and time were measured. Preliminary results showed touch interaction had the most consistent performance and shortest completion times.
1) Interactive machine learning (IML) allows users to train, classify, view, and correct classifications of images in an interactive fashion, unlike classical machine learning which is slow and not interactive.
2) IML enables feature selection to be done by the machine learning algorithm during training rather than requiring users to pre-select features.
3) The Crayons tool was created to implement IML using a simple painting metaphor, allowing users to quickly create image classifiers in minutes rather than weeks by focusing on classification rather than image processing details.
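The interactive train-correct loop behind the painting metaphor can be sketched as follows. This is a toy stand-in, not the Crayons implementation: a nearest-centroid rule on pixel intensity replaces the real learner, and all names are illustrative assumptions.

```python
# Toy sketch of interactive machine learning via a painting metaphor:
# each "brush stroke" adds labelled pixels, the classifier retrains
# immediately, and the user corrects mistakes with further strokes.
class CrayonLikeClassifier:
    def __init__(self):
        self.samples = {}  # label -> list of painted pixel intensities

    def paint(self, label, intensities):
        """User paints a stroke: labelled training pixels arrive."""
        self.samples.setdefault(label, []).extend(intensities)

    def classify(self, intensity):
        """Assign the label whose mean painted intensity is closest."""
        if not self.samples:
            return None
        return min(
            self.samples,
            key=lambda lab: abs(
                intensity - sum(self.samples[lab]) / len(self.samples[lab])
            ),
        )
```

The key property is the loop: a misclassified region prompts another corrective stroke, which shifts the class means and immediately changes subsequent classifications, so the user works in terms of classification rather than feature engineering.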
Modern computer-aided design (CAD) systems and software tools have played a significant role in improving the efficiency of the overall product design process, ensuring geometric accuracy and the exchange of product model data. However, the impact of these technologies is largely restricted to the detailed modeling and engineering analysis that occur during the embodiment design phase. Conceptual design has not benefited from these sophisticated and highly precise software tools to the same degree, because the creative activities associated with developing and communicating potential solutions with minimal detail are far less formulaic in their implementation. At the early stages of product design, the specifications and constraints have not been fully established. Industrial designers and engineers need the freedom to change and modify the product configuration and mechanical behavior to investigate a wide range of alternative solutions. Any CAD system that seeks to support and enhance conceptual design must, therefore, enable natural and haptic modes of human-computer interaction. Recent advancements in high-speed, multi-core computer hardware and virtual reality (VR) technology provide opportunities to link the more fluid processes of creative conceptual design with the rigidly defined tasks of product detailing and engineering analysis. This paper discusses the role that virtual reality can play in concept design.
The document summarizes 4 papers on novel user interfaces:
1. Fit Your Hand dynamically adjusts a mobile interface based on hand size and usage habits inferred through machine learning.
2. BLUI allows hands-free interaction on screens through localized sound detection of blowing.
3. WUW projects augmented reality through a wearable camera and allows gestural control through freehand gestures.
4. Light-tech interaction embeds low-power modules for ubiquitous interfaces, like an "emotional lamp" that responds to facial expressions.
The document introduces two tools for interacting with digital information in 3D space:
1. Beyond allows users to directly manipulate 3D digital objects using physically retractable tools that project into the digital space beyond the screen. This breaks down barriers between physical and digital worlds.
2. SpaceTop extends the desktop interface by adding a transparent display above the keyboard, allowing users to reach hands into this 3D digital workspace to directly grab and manipulate floating windows and files as if they were physical objects. This seamlessly integrates 2D and 3D interactions.
A Survey Paper on Controlling Computer using Hand Gestures (IRJET Journal)
This document summarizes a survey paper on controlling computers using hand gestures. It discusses various techniques that have been used for hand gesture recognition in previous research papers. The paper reviews literature on hand gesture recognition methods based on sensor technology and computer vision. It describes applications of hand gesture recognition such as controlling media playback, scrolling web pages, and presenting slides. Common challenges with hand gesture recognition are also mentioned, such as dealing with complex backgrounds and lighting conditions. The goal of the paper is to perform a literature review on prominent techniques, applications, and difficulties in controlling computers using hand gestures.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of... (Waqas Tariq)
This document describes a study that evaluated the Microsoft Kinect device for natural interaction in an information visualization system called MetricSPlat. The researchers hypothesized that Kinect would enable more efficient interaction than a mouse for tasks like identifying clusters and outliers in multidimensional data projections. They used a participatory design process with users to develop an interaction scheme for controlling MetricSPlat with Kinect gestures. Usability tests were conducted during design to evaluate each iteration. After finalizing the Kinect scheme, comparative usability tests were performed between Kinect and mouse. The results found that while users reported high satisfaction with Kinect, it was less efficient than the mouse in terms of task completion times and precision for the specific visualization tasks studied.
We verify the hypothesis that Microsoft's Kinect device is suited to defining more efficient interaction than the commodity mouse in the context of information visualization. To this end, we used Kinect during interaction design and evaluation for an information visualization application (over agrometeorological, cars, and flowers datasets). The devices were tested on a visualization technique based on clouds of points (multidimensional projection) that can be manipulated by rotation, scaling, and translation. The design was carried out according to the Participatory Design technique (ISO 13407), and the evaluation comprised an extensive set of usability tests. In the tests, users reported high satisfaction scores (easiness and preference) but also produced low efficiency scores (time and precision). In the specific context of a multidimensional-projection visualization, our conclusion is that, with respect to user acceptance, Kinect is an adequate device for natural interaction; but, for desktop-based production, it still cannot compete with the long-established mouse.
1. Wiimote and Kinect: Gestural User Interfaces add a
Natural third dimension to HCI.
∗
Rita Francese Ignazio Passero Genoveffa Tortora
University of Salerno University of Salerno University of Salerno
via Ponte Don Melillo, 1 via Ponte Don Melillo, 1 via Ponte Don Melillo, 1
Fisciano (SA), Italy Fisciano (SA), Italy Fisciano (SA), Italy
francese@unisa.it ipassero@unisa.it tortora@unisa.it
ABSTRACT
The recent diffusion of advanced controllers, initially designed for home game consoles, has been rapidly followed by the release of proprietary or third-party PC drivers and SDKs suitable for implementing new forms of gesture-based 3D user interfaces. Exploiting the devices currently available on the game market, it is now possible to enrich user interaction with desktop computers through low-cost motion capture, building new forms of natural interfaces and new action metaphors that add a third dimension, as well as a physical extension, to the interaction. This paper presents two systems specifically designed for 3D gestural interaction with 3D geographical maps. The proposed applications rely on two consumer technologies, both capable of motion tracking: the Nintendo Wii and Microsoft Kinect devices. The work also evaluates, in terms of subjective usability and perceived sense of Presence and Immersion, the effects on users of the two different controllers and of the adopted 3D navigation metaphors. Results are very encouraging and reveal that users feel deeply immersed in the dynamic 3D experience, that the gestural interfaces quickly bring the interaction from a novice to an expert style, and that they enrich the synthetic nature of the explored environment by exploiting user physicality.

Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: User Interfaces; B.4.2 [Input/Output and Data Communication]: Input/Output Devices.

General Terms
Design, Experimentation, Human Factors.

Keywords
3D Interfaces, Natural User Interfaces, Motion Capture, Kinect, Wiimote, Human Computer Interaction, Empirical Evaluation.

∗Corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
AVI '12, May 21-25, 2012, Capri Island, Italy
Copyright 2012 ACM 978-1-4503-1287-5/12/05 ...$10.00.

1. INTRODUCTION
On the first day of April 2011, Google announced the revolutionary Gmail Motion Beta application. Thanks to standard webcams and Google's patented spatial tracking technology, Gmail Motion claimed it would detect user movements and translate them into meaningful characters and commands. Despite the April Fool character of this announcement (one of the authors must confess that, forgetting the particular day, he tried to use the announced service), the supporting website [4] contains statements about gestural interfaces that sound undoubtedly interesting: "Easy to learn: Simple and intuitive gestures", "Improved productivity: In and out of your email up to 12% faster", as well as "Increased physical activity: Get out of that chair and start moving today".

Because of its mainly typewriting nature, the mailing activity is not the best candidate for a gestural interface, but the technology is now mature and offers new consumer hardware that can easily support applications based on natural human-computer interaction. Traditional GUIs adopt mouse and keyboard, building the interaction on artificial elements such as windows, menus, and buttons. Natural user interfaces instead disappear behind the content, and a direct manipulation style (e.g., touch, voice commands, and gestures) is the primary interaction method [2, 10]. Despite that, too often the window/icon/mouse metaphors contaminate gestural interfaces and nullify their efficacy [28]: the user is involved in a frustrating experience, since a motion-capture-based interface fails to be effective when it is used only to mimic classic mouse interaction. Indeed, differently from artifacts (e.g., documents, pictures, videos), the cursor arrow is not a good target for direct manipulation interfaces. Looking at the game market, the consoles capable of motion capture (by now, almost all of them) limit window/icon interaction to basic operations (e.g., game menus and console administration), and offer players gaming experiences based on analogies between control gestures and the real ones.

In this paper, we describe two applications that adopt gestural interaction for controlling user navigation of Bing maps [11]. The two applications, Wing (Wiimote Bing) and King (Kinect Bing), represent the occasion for experimenting, in the context of 3D environments, with two natural interfaces based on two consumer controllers: the Nintendo Wii Remote (also known as Wiimote) [19] and the Microsoft Kinect [13].
Figure 1: The Wiimote and Nunchuk sensor configuration adopted for the Wing gestural interface.

Figure 2: Wing: the Wiimote and Nunchuk motion controllers during navigation.

The novelty of the proposed interaction lies in the controlling metaphors, which completely abandon the point-and-click interaction of classic Bing PC navigation in favour of two gesture-based natural interfaces. Aiming to evaluate how the degree of body involvement affects user perceptions of the experience, the two applications have been empirically evaluated via the Usability Satisfaction Questionnaires [9] and via the well-known Witmer and Singer questionnaire [30], which is specific for assessing the perceived sense of Presence and Immersion in a virtual environment. The results confirmed the enthusiastic impressions we had previously gathered by observing that users quickly felt comfortable with the interfaces and pleasantly interacted with both systems.

2. BACKGROUND
In the past years, the game console market has been a really competitive sector that exploited, and often drove, the development of state-of-the-art processing and graphical technologies to compete for a really demanding customer population. Recently, following evolving customer preferences, the market trend has changed and has focused more on realistic gestural human-computer interfaces than on the computing performance or graphical capabilities of the proposed products [23]. With the Wii™ console, Nintendo (2006) proposed a game platform not particularly exciting in terms of performance, but it broke several records as the best-selling console [32]. The reasons for this success are the revolutionary characteristics of the Wii control system, the Wiimote [8] (shown in Figure 1), its high expandability with several accessories, and the possibility of offering users experiential games enhanced with active gestures and really effective playing metaphors. The success of the Nintendo Wii clearly shows the influence of the associated novel gestural interfaces on user satisfaction. Following Nintendo, Sony and Microsoft, the other two competitors in the game console sector, proposed their own motion-sensing game controllers to answer the users' need to play in a natural manner. Their answers to the market demand were the PlayStation Move™ (2008) and, only in November 2010, the Kinect™ controller. While the Wiimote and PS Move offer similar controlling experiences, limited by the need to hold the controllers in the hands, Kinect represents the first consumer full-body motion capture device, simply based on an infrared emitter and two video cameras. However, thanks to motion detection, all these controllers let the gaming experience be realistically based on gestures analogous to the mimicked ones. While the Wiimote and Kinect were imported from the game console world to the computer one, Asus and PrimeSense proposed Wavi Xtion, their low-cost motion capture alternative, specifically designed for PCs and smart TVs [1]. Exploiting the availability of a simple connection with normal PCs and the diffusion of official or unofficial SDKs [26, 16, 21] for developing desktop applications controlled by these devices, researchers are exploring new interaction instruments and modalities, as well as new natural interfaces, for the most diverse applications in several disciplines, ranging from teaching to medicine.

2.1 Wiimote and Applications
The Wii Remote (often shortened to Wiimote) [19] was introduced in 2006 by Nintendo and promoted the success of the Wii console. The Wiimote communicates over a wireless Bluetooth connection, offers a set of classic joypad buttons, and senses acceleration along three axes. The Wiimote is also equipped with an optical sensor that, associated with an infrared (IR) source (i.e., the Wii sensor bar), allows determining where the device is pointing. The Wiimote can be complemented with Motion Plus, which adds a gyroscope to improve the detection of complex movements [29]. The device is equipped with 5.5 kilobytes of memory (mostly used for user customisations) and adopts as feedback mechanisms a speaker, a vibration motor, and four LEDs [8]. The Nunchuk is an extension that plugs into the Wiimote via a connection cable and adds two buttons, an analog joystick, and an independent three-axis accelerometer (usually associated with the user's secondary hand). The adoption of motion-sensing game controllers on desktop computers makes it possible to implement novel interfaces capable of deeply involving users in realistic experiences. In [28], the authors, considering touch-based interfaces, claim that natural user interfaces are characterised mainly by high learnability. In our case, users quickly and spontaneously move from what we consider a basic navigation style to an expert one, but, as shown in the Evaluation section, we are also interested in the impact of user perceptions and involvement on the proposed 3D navigation.
A case in point is the kinesthetic learning experience proposed in [5], where Ho-Shing Ip et al. propose a didactic experience exploiting the interplay between body, mind and emotions to amplify the learning value, together with a model for investigating the effects of immersive body-movement interaction with virtual characters and scenarios. In particular, they adopt the Wiimote and the Nunchuk extension for controlling the flight of a bird in a Hummingbird Flying Scenario. As in [5], we also exploit the amplifying effects on usability and user involvement of natural interaction style interfaces and their physical nature, giving physicality to the adopted synthetic environments. De Paolis et al. propose, in [3], a serious game based on the philological reconstruction of city life in the XIII century. As input peripheral, the authors adopt a Wiimote controller with the Balance Board extension, which adds four pressure sensors to the control system and is used for detecting walking gestures. Yang and Lee, in [32], propose the adoption of the Wiimote as a wireless presentation controller and a wireless mouse. They adopt the IR sensor for tracking the Wiimote pointing direction of up to four users. Santos et al., in [22], perform a user study comparing two different Wiimote configurations with the classic desktop mouse in controlling Google Earth navigation. Both proposed Wiimote configurations mimic the mouse with two buttons: one detects user movements via the accelerometer, the other via the IR sensor. The study reveals that the Wiimote presents several advantages over desktop and mouse. Differently from [22], we adopt and evaluate two applications based on the Wiimote and Kinect controllers and propose two natural interfaces explicitly designed for 3D navigation and really far from classic desktop metaphors.

Figure 3: King: the Kinect device controls Bing maps navigation.

2.2 Kinect and Applications
With Kinect, Microsoft distributed, as a controller for the Xbox system, the first motion capture device on the consumer market. The device has been available since November 2010, the first unofficial SDK is dated December 2010 [26, 27], while the first official SDK for PC users was released by Microsoft in June 2011 in beta version and is free for non-commercial uses [16]. An estimate of Kinect's marketing success can be made considering that, within the first 25 days, Microsoft sold 2.5 million Kinect devices [23].

The Kinect sensor embeds a four-element linear microphone array capable of sophisticated acoustic echo cancellation, noise suppression and direction localisation, as well as an IR emitter and two cameras that deliver depth information, colour images and skeleton tracking data. The natural user interface API in the Kinect for Windows SDK enables applications to access and manipulate the data collected by the sensor [16]. The optimal working distance ranges from 0.8 to 4 meters. In this range, the depth and skeleton views detect users only if the entire body fits within the captured frame, but the device pointing direction can be adjusted by a motorised tilting mechanism. To overcome the working distance restrictions while maintaining good screen readability, we visualised the King client via a room projector (we did the same for Wing, to avoid screen differences biasing the proposed evaluation).

In the context of 3D model navigation, Lacolina et al. adopt natural interfaces based both on multitouch tables and on gesture recognition [7]. The motion capture is performed by analysing the raw depth images provided by a Kinect sensor. Phan adopts the OpenNI toolkit and develops a Kinect client for controlling Second Life gestures, aiming at establishing a direct channel between the user's body and his/her avatar [24]. Also in this case, the aim is to improve the virtual environment experience and the perceived immersion by letting the user interface disappear behind real gestures. Boulos et al. base their application, Kinoogle [6], on Kinect and develop a gestural interface for controlling Google Earth navigation. The proposed gestures are mainly based on hand tracking and resemble the classic multitouch interaction style. Differently from them, we propose a navigation control inspired by natural flight gestures, where user actions reflect on the map navigation according to metaphoric similarity.

3. WING AND KING APPLICATIONS
Wing and King are two controller applications developed on the Wiimote with Brian Peek's SDK [21] and on Kinect with the official SDK [16]. Both applications control a Bing map client [11] and react to user gestures inspired by well-accepted metaphors. It is important to point out that Bing maps represent just one instance of 3D navigable environments and provide us the dimensions for experimenting with our natural interfaces. During the evaluation phases we noticed, for both the Wing and King systems, that users quickly and spontaneously moved from novice use, characterised by a single navigating command at a time, to a sort of expert use, when they started to combine turning with altitude and movement commands, generating more complicated navigation paths. A video of the applications is available at [20].

3.1 Wing and the Wiimote Controller
Wing is a Bing map navigator controlled by the accelerometers of a Nintendo Wiimote and a Nunchuk [8]. The application is developed in C# [15] and connects to both controllers via Bluetooth using the Wiimote lib [21].
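The tilt sensing underlying this kind of accelerometer-driven control can be sketched as follows. This is a minimal illustration, not the Wing implementation: the roll/pitch estimation from gravity is a standard formula, but the dead-zone width and speed scaling are invented here, and a real client would read calibrated samples through the Wiimote library rather than raw tuples.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate roll and pitch (in degrees) from a 3-axis accelerometer
    measuring gravity, assuming the device is held roughly still."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def wing_command(roll, pitch, dead_zone=10.0):
    """Map tilt to navigation commands, motorcycle-style: roll acts as a
    throttle (forward/backward speed), pitch as the handlebar (turn).
    Angles inside the dead zone are ignored; outputs are clamped to [-1, 1].
    Dead-zone and scaling values are illustrative, not the paper's."""
    speed = 0.0 if abs(roll) < dead_zone else (roll - math.copysign(dead_zone, roll)) / 80.0
    turn = 0.0 if abs(pitch) < dead_zone else (pitch - math.copysign(dead_zone, pitch)) / 80.0
    return {"speed": max(-1.0, min(1.0, speed)),
            "turn": max(-1.0, min(1.0, turn))}
```

With this mapping, small involuntary tilts produce no motion, while a 50-degree roll yields half of the maximum forward speed.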
The controllers and the dimensions adopted for building the Wing natural interface are shown in Figure 1. In the image, the movements detected as controls are depicted on the Wiimote (right side of the image) and on the Nunchuk (left side). Wing proposes to users two interaction metaphors widely diffused and accepted in the videogame sector. The main controller acts on forward/backward movements when rotated along its longer dimension (i.e., roll), while its inclination (i.e., pitch) determines whether the navigation turns. Both gestures are inspired by the motorcycle metaphor: the Wiimote's roll rotation acts as a motorcycle throttle connected to forward and backward movements, while the turning gestures resemble the handlebar of the imaginary motorcycle during turning. Forward and backward movements can be requested at different speeds according to the Wiimote's rotation angle. The aeroplane cloche (control stick) metaphor is implemented on the Nunchuk and controls altitude: its tilting direction determines vertical variations of the map navigation.

Figure 2 shows the Wing application and how users grasp the controls during navigation. The Wing user holds both controllers with forearms aligned with the elbows. The turning gestures are detected when the user turns the Wiimote: the credibility of the handlebar metaphor is deeply perceived and, during navigation, we observed that a large share of users kept the Wiimote and Nunchuk aligned even though the two components are independent. The altitude gestures are activated by the Nunchuk when the user rotates the wrist upwards (increasing altitude) or backwards (decreasing altitude), or when he/she accordingly bends the forearm at the elbow. In both cases the gesture well reflects the videogame action of pitching an aeroplane cloche up or down. Altitude, movement and turning commands can be combined to obtain complex flight/navigation behaviours. All the proposed gestures are becoming more and more popular among gamers [17, 18, 19] and are ready to be extended to PC users.

Figure 4: The King control gestures.

Figure 5: The results of the ASQ questionnaires.

3.2 King and the Kinect Controller
King is a Bing map navigator based on the Microsoft Kinect controller. The application has been developed in C# and associates with the Bing map a simple window showing a paper aeroplane on a sky background. The application controls the Kinect sensors via the official SDK [16].

Figure 3 shows the King application and the Bing map client during navigation. The paper aeroplane reflects the gesture performed by the user and is the only feedback mechanism (useful, if we consider that, with respect to Wing, the King interface is completely hands-free). King proposes to its users the bird (or aeroplane) metaphor and customises on it the gestures associated with the various commands. Figure 3 shows a user performing a left turn: she inclines the aligned arms downward on the left, as a bird or an aeroplane would have done, and while the flight on the Bing map turns accordingly, the paper plane in the feedback window performs a similar rotation. The idea is to mimic, where possible, the bird's wing movements with arm gestures. At the moment, the game market still does not offer examples of similar gestures, but generic natural interfaces for sport, fighting or dancing games are already available [14, 25, 12].

Figure 4 shows the controlling gestures used for the King map navigator. The neutral position for the navigation is depicted in sub-picture (a): when the user stands with open and aligned arms, the navigation halts. The gesture associated with forward movement is depicted in sub-picture (b) and is detected when the user moves one hand (slow motion) or both hands (fast motion) ahead of the elbows. This corresponds to the skeleton depicted in (b) (arms bent), but it is also detected when the user extends the arms forward. The bird/plane metaphor does not contemplate a backward movement, and we do not violate this assumption by providing a surrogate gesture. Figure 4 (c) and (d) depict the turn gestures previously described. The altitude of the navigation is controlled by gestures (e) and (f). The idea is to expose to King users two gestures that can easily be associated with rising or descending effects, avoiding the need to dynamically mimic the bird flight gesture, which is quite tiring. To raise or lower the altitude, King users are required to start and hold the static gesture ((e) or (f)) until the desired observation height is achieved. Figure 4 also depicts the states of the feedback paper aeroplane according to the gesture detected. The movement, turning and altitude gestures proposed as the King natural interface can be combined to obtain the desired navigation experience.
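A pose classifier for the neutral, forward and turn gestures just described can be sketched from tracked joint positions. This is a hypothetical sketch, not the King implementation: the joint names, coordinate convention (metres in camera space, z growing away from the sensor) and thresholds are all invented for illustration, and the real Kinect SDK exposes skeleton data through its own types.

```python
def classify_king_pose(skel, depth_eps=0.15, tilt_eps=0.20):
    """Classify a tracked skeleton into King-style navigation commands.
    `skel` maps joint names to (x, y, z) tuples. Rules, per the metaphor:
    - arms open and level              -> halt (neutral pose)
    - one/both hands ahead of elbows   -> move forward (slow/fast)
    - aligned arms tilted to one side  -> turn towards the lowered arm
    Thresholds (depth_eps, tilt_eps) are illustrative guesses."""
    lh, rh = skel["hand_left"], skel["hand_right"]
    le, re = skel["elbow_left"], skel["elbow_right"]
    # A hand counts as "ahead" when it is closer to the sensor than its elbow.
    ahead = [h[2] < e[2] - depth_eps for h, e in ((lh, le), (rh, re))]
    if all(ahead):
        return "forward_fast"
    if any(ahead):
        return "forward_slow"
    dy = lh[1] - rh[1]  # positive when the left hand is higher
    if dy > tilt_eps:
        return "turn_right"  # right arm dips, bird banks right
    if dy < -tilt_eps:
        return "turn_left"   # left arm dips, bird banks left
    return "halt"
```

The altitude gestures (e) and (f) are deliberately omitted here, since the source does not specify the exact poses; they would add two further static-pose branches.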
4. EVALUATION
The Wing and King applications have been evaluated in a laboratory session organised according to the suggestions provided by Wohlin et al. in [31], aiming at assessing perceived usability and the sense of Presence in the virtual environment [30]. The participants in the study were 24 (8 women) undergraduate students and employees of our Faculty who volunteered to take part in the experiment. The student population was selected from a program that does not require or provide particular competences in 3D virtual environments, games or natural user interfaces. Their ages ranged between 18 and 41 years, with an average of 24. Before starting the experiment, we assessed participant skills in the videogame and natural interface sectors. In our sample, 8 participants indicated that they play digital games at least once a week, three were Nintendo Wii players, and just two of them were using Xbox and Kinect.

Table 1: Witmer and Singer questions
1. Were you involved in the experimental task to the extent that you lost track of time? (INV)
2. How involved were you in the virtual environment experience? (INV)
3. How well could you concentrate on the assigned tasks or required activities rather than on the mechanisms used to perform those tasks or activities? (DF)
4. How much did the control devices interfere with the performance of assigned tasks or with other activities? (DF)
5. How responsive was the environment to actions that you initiated (or performed)? (CF)
6. How natural was the mechanism which controlled movement through the environment? (CF)
7. How natural did your interactions with the environment seem? (CF)
8. How proficient in moving and interacting with the virtual environment did you feel at the end of the experience? (CF)
9. Were you able to anticipate what would happen in response to the actions that you performed? (CF)
10. How quickly did you adjust to the virtual environment experience? (CF)
11. How compelling was your sense of moving around inside the virtual environment? (CF)
12. How much did your experiences in the virtual environment seem consistent with your real-world experiences? (CF)

4.1 Subjective Usability
Subjective usability has been evaluated via the After-Scenario (ASQ) and the Computer System Usability (CSUQ) questionnaires which, as shown by Lewis, provide strong evidence of generalizability of results and wide applicability. The questions have been evaluated on a seven-point Likert scale anchored from 1 (strongly disagree) to 7 (strongly agree).

The ASQ is a three-item questionnaire used to assess participant satisfaction after the completion of each task; it evaluates the time to complete the task, the ease of completion, and the adequacy of support information. The CSUQ questionnaire is made up of 19 questions assessing user satisfaction with system usability and can be aggregated into four factors:

• Overall Evaluation (OVR),
• System Usefulness (USE),
• Information Quality (INFO),
• Interface Quality (INTERF).

More details on the questionnaires and the questions are available in [9].

4.2 Presence and Immersion
In this work we adopt Bing maps as a 3D virtual environment in which to experiment with the Wing and King interfaces. 3D environments have a significant advantage over settings based on 2D technology, since they induce a strong sensation of Presence in their users [30]. During Bing map navigation, users move in a virtual space generated by the computer, react to actions, and change their point of view on the scene through movement. Witmer and Singer define Presence as "the subjective experience of being in one place or environment, even when one is physically situated in another" and "...presence refers to experiencing the computer-generated environment rather than the actual physical locale". As they state, several factors contribute to increasing presence: Control, Realism, Distraction and Sensory input. Presence is maximised when the user interacts with the environment in a natural manner, controls the events, sees the system behaving as expected, and sees the 3D environment changing according to his commands. The minimisation of distractions, which can occur when a user has problems controlling the navigation, can likewise increase the perceived immersion in the experience and the virtual environment.

As suggested in [5], we hypothesised that the physical dimension of the proposed interfaces may influence the user's sense of immersion in the proposed navigation experience. Aiming at assessing the degree of Presence perceived by users during the tasks, we extracted from the Witmer and Singer questionnaire 12 questions appropriate for our empirical evaluation. The questions are reported in Table 1, aggregated under three factors:

• Involvement (INV),
• Distraction Factor (DF),
• Control Factor (CF).

The answers to this questionnaire have also been formulated on the seven-point Likert scale, from 1 (strongly disagree) to 7 (strongly agree).

4.3 Experiment Design
In the proposed usability study, participants tried our gestural interfaces in quick succession, engaging in two navigation tasks. After being individually instructed on the Wing and King systems, the users were required to complete the navigation of two geographical paths involving well-known Italian cities:

• SEA: Cagliari-Napoli-Palermo
• LAND: Genova-Roma-Venezia

Both tasks are comparable in terms of distances and difficulty in localising the target cities. However, to avoid biasing the evaluation with task or tested-application orders, we adopted a balanced paired design for the experiment, as suggested in [31]: we divided our users into two groups, where each member of the same group started the experiment with the same system. Within each group, half of the participants started with the SEA task and the other half with the LAND one.
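This counterbalancing scheme, two starting systems crossed with two starting tasks, can be sketched as a round-robin assignment. A minimal sketch under the assumption that the four cells are filled equally (with 24 participants, six per cell); the paper does not describe how its groups were actually drawn.

```python
from itertools import product

def balanced_assignment(participants):
    """Assign participants to a balanced paired design: the starting
    system (Wing/King) is crossed with the starting task (SEA/LAND),
    and each of the four conditions receives the same number of
    participants. Requires a multiple of 4 participants."""
    assert len(participants) % 4 == 0, "group sizes would be unbalanced"
    conditions = list(product(("Wing", "King"), ("SEA", "LAND")))
    # Round-robin over the four (first system, first task) conditions.
    return {p: conditions[i % 4] for i, p in enumerate(participants)}
```

In a real study the participant list would be shuffled before assignment, so that condition order does not correlate with recruitment order.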
Figure 6: The results of the CSUQ questionnaires.

Figure 7: The results of the Presence questionnaire.

Table 2: CSUQ categories details (µ, σ)
        OVR           USE           INFO          INTERF
Wing    5.13, 1.08    4.85, 1.21    5.17, 0.94    5.75, 1.33
King    5.78, 0.75    5.89, 0.78    5.4, 0.82     6.25, 0.85

Table 3: Witmer and Singer categories details (µ, σ)
        INV           DF            CF
Wing    5.39, 0.83    4.41, 1.14    5.87, 0.27
King    5.89, 0.69    5.39, 0.94    6.14, 0.26

After each task, all participants filled in the ASQ and the CSUQ questionnaires and answered the questions from the Witmer and Singer questionnaire [30] reported in Table 1. Let us point out that the results of question 4 have been reversed before aggregating the DF category.

4.4 Results
The first good impressions of the usability of the proposed systems were collected during the experiment by listening to participant comments and observing their behaviour. These insights were later confirmed by examining the questionnaire answers. Figure 5 shows the boxplots depicting the ASQ results, which give a preliminary idea of task difficulty, user preferences and the perceived usability of the proposed interfaces. Users globally assigned really high scores to both systems, but the effect of the King feedback mechanism (the paper aeroplane) raised the INFO score associated with King above that of its competitor. In the same direction, the EASE boxes confirm that the tasks performed via the Kinect interface were perceived as easier than the Wing ones, and this category shows the biggest difference between the two systems. The TIME category also indicates that the two tasks were perceived in the same manner with respect to the time assigned, but the results are always characterised by a slight preference for the King system.

Figure 6 reports the results of the CSUQ aggregated in the four categories suggested by Lewis. The first observation on the data concerns dispersion: comparing the two boxplots in Figure 6, it is evident that user perceptions are characterised by a higher variability for the Wing scores than for the King ones. The latter system was also perceived as better with respect to all four aggregating factors, but exhibits the largest differences in the OVR (Overall Evaluation) and USE (System Usefulness) categories (as shown in Figure 6 and detailed in Table 2): the better performance of King is mainly concentrated in its System Usefulness and influences the overall opinion about the systems. Table 2 summarises, via the µ and σ values, the results of the CSUQ categories. All the considerations drawn from Figure 6, based on the boxes (and consequently on the medians), obviously reflect on the values reported in Table 2: King met with more user enthusiasm than Wing. Direct observation of users during the study suggests the same conclusion: almost all participants were disappointed when their King task ended, showing that they would have preferred to continue the experience based on the bird/aeroplane metaphor.

Once the degree of Usability and perceived System Usefulness had been assessed for both the Wing and King systems, we extended the evaluation to user perceptions in terms of Presence and Involvement in the virtual experience. To this aim, we extracted 12 questions from the Witmer and Singer questionnaire to integrate the ASQ and CSUQ ones. As shown in Table 1, the Presence questionnaire is aggregated into three factors aiming at assessing how far users experience the computer-generated environment rather than their physical locale. The adoption of a controller-based natural interface (Wing) and a hands-free one (King) lets us understand whether, in the context of 3D map navigation, physical gestures (and the nature of the sensing device) increase the likability of the experience and the depth of involvement. Figure 7 shows the Presence questionnaire results aggregated in boxplots with respect to the previously described factors. The measures of central tendency and spread for the Presence categories are reported in Table 3. Also in the case of Presence and Involvement, Wing opinions show a higher variability than King ones and, accordingly, user impressions are more concordant for the King system than for Wing. What is really remarkable is that, although the Wing system had good success in user evaluations, higher result values were obtained by the King experience. As an example, the Involvement factor was evaluated µ=5.39 for Wing, while King performed better (µ=5.89). Both systems have been positively judged in terms of Control Factor, as shown in the rightmost boxes of both subplots of Figure 7: Table 3 shows that King obtains µ=6.14 while Wing scores µ=5.87. With respect to the Distraction Factor category, the Wing interface (µ=4.41) was perceived as a little less effective than the King one (µ=5.39). This is probably due to the hands-free interface built thanks to the adoption of the Kinect sensor: while the Wing user holds the Wiimote, the Microsoft device proved to be really effective in letting the interface disappear behind natural gestures. With respect to the sense of Presence and Involvement in the Bing virtual environment, it can be interesting to further detail the answers to some of the questions reported in Table 1.
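The aggregation used for Tables 2 and 3, per-factor µ and σ over pooled 7-point Likert answers, with negatively worded items reversed before pooling, can be sketched as follows. A minimal sketch: the function name, data layout and rounding are invented here; only the reversal rule (x becomes 8 - x on a 1..7 scale, applied to question 4 for the DF factor) comes from the text.

```python
from statistics import mean, pstdev

def aggregate_factors(answers, factors, reversed_items=(4,)):
    """Aggregate 7-point Likert answers into per-factor (mean, sd).
    `answers` maps question number -> list of participant scores (1..7);
    `factors` maps factor name -> list of question numbers. Negatively
    worded questions (here, question 4) are reversed as x -> 8 - x
    before pooling, as done for the DF category."""
    scored = {q: ([8 - x for x in xs] if q in reversed_items else xs)
              for q, xs in answers.items()}
    result = {}
    for name, questions in factors.items():
        pooled = [x for q in questions for x in scored[q]]
        result[name] = (round(mean(pooled), 2), round(pstdev(pooled), 2))
    return result
```

With this convention, a raw score of 1 on question 4 ("the devices interfered very little") contributes 7 to the Distraction Factor, so higher DF values consistently mean less distraction.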
These answers help to better understand the effects of the proposed interfaces. Figure 8 aggregates in two subpictures (labelled Wing and King) the histograms for all twelve questions extracted from the Witmer and Singer questionnaire.

Figure 8: The results of the Witmer and Singer questions.

Question 2 is directly formulated to assess the degree of user involvement in the virtual environment experience. As shown in the Q2 histograms of Figure 8, Wing received good scores from almost all participants, but King was the better-performing application and concentrated the great majority of user votes around 6 and 7, directly indicating that the proposed physical interface induces a strong and deep sense of involvement in its users. Question 3 focuses on the intrusiveness of the interface, which ideally should disappear behind user actions. The King interface appears to be the more transparent to users, and almost all participants scored it above 4, while the Wing experience induced more varied opinions in its users (differently from King, also negative ones). This is confirmed by the answers to Questions 6 and 7, which specifically evaluate how natural the proposed gestural interfaces were perceived to be: they went in the same direction as the previous answers but, in this case, the differences between the two systems are less evident; King's natural interaction with the Bing environment was nevertheless perceived better than Wing's.

Summing up, the evaluation provided really good user impressions: the Wing and King systems were judged usable and the users were satisfied with both. The proposed experience has also shown that the more natural the interface is (in the sense that it disappears behind the gesture), the more the users are involved in the virtual environment and its hosted activities. We are conscious that the novelty of Kinect may have influenced participants, but we also trust that the physicality of the proposed natural interfaces and the immediate learnability of the metaphors are the main motivations that positively influenced the testers and their opinions (see also the high scores obtained by Wing and its interface).

Obviously, the results presented in this work are strictly related to the context adopted for the evaluation and are limited to the 3D navigation of virtual environments. The same interfaces may not be suitable for other application domains, where user reactions and opinions may differ. The Kinect approach, indeed, even if easy to learn, is not appropriate for prolonged usage because of the physical effort required of users. This effort is, however, vital for giving a physical dimension to the navigation experience of 3D environments: the involvement results are very positive, and the physical perception of the experience, amplified by the surrounding and physical nature of the controlling interface, is very relevant. As a consequence, the Kinect approach (but also the Wiimote one) can be very appealing for children's education. Indeed, when teaching geography, this kind of experience helps to deeply involve kids and to solicit their spatial perception of the explored environment. As future work, it will be interesting to experiment with the Wing and King approaches in a primary didactic setting and to understand the educational effects of the proposed, game-derived, controlling metaphors.

5. CONCLUSION
The recent diffusion of consumer game controllers offering motion tracking functionality, together with the release of connection drivers and SDKs, represents a wonderful opportunity for implementing and experimenting with new forms of gesture-based 3D user interfaces. In this paper, the Wing and King applications, as well as the associated navigation metaphors, have been presented and evaluated in terms of subjective usability and perceived sense of Presence and Immersion. The proposed applications adopt user motion tracking via the Nintendo Wii and the Microsoft Kinect devices. The two proposed Bing map navigators represent the occasion for experimenting with 3D gestural interfaces and their usability, as well as for assessing the effects on the user's sense of Presence and Immersion in a synthetic 3D environment.
ceived. In particular, question 6 aims at evaluating the in- Results of the evaluation performed via standard ques-
terface while the 7th one is focused on the interaction with tionnaires are really encouraging and, also considering the
the environment. Despite Wing has obtained high values for natural satisfaction boost related to the novelty of Kinect
question 6 (Figure 8 shows the result bell centred between 5 controller, suggest that the more the interface is natural
and 6), King shows all the benefits related to its hand-free and involves their body in the action, the more the user
gestural interface with 21 users voting it more than 5. Also are satisfied and involved in the 3D maps navigation ex-
in this case, it is important to point out that, with better perience. Important success usability factors found are the
user opinions, King evaluators also agreed with a minor dis- ease of use of the King system and the deep involvement
persion on their preferences. By examining the answers to his gestural interface induces in users. Lesson learnt with
question 7, it is possible to notice that the user preferences this experience suggests to avoid, when possible, the clas-
122
8. sic window/icon/mouse interaction for experimenting, ob- [15] Microsoft. Visual c#. retrieved on December 2011
viously for appropriate tasks (i.e., not keyboard intensive, from http://msdn.microsoft.com/en-
etc.), new gestures and new forms of physical commands. us/library/kx37x362.aspx.
As a future work, we intend to experiment the interfaces in [16] Microsoft. Kinect sdk for developers. retrieved on
a kid geographical didactic context that will probably bene- December 2011 from http://kinectforwindows.org/,
fit of the effects of the proposed experiences and interfaces. 2011.
[17] Nintendo. Donkey kong jet race. retrieved on
6. ACKNOWLEDGMENTS December 2011 from http://tinyu.me/VdTB7.
We would like to thank the little Giuseppe for his natu- [18] Nintendo. Mario kart wii. retrieved on December 2011
ral attitude to game and his children’s insatiable desire to from http://www.mariokart.com/wii/launch/.
explore. [19] Nintendo. Wii remote. retrieved on December 2011
from http://www.nintendo.com/wii/what-is-
7. REFERENCES wii/#/controls.
[1] ASUS. Wavi xtion. retrieved on December 2011 from [20] I. Passero. Two bing maps controllers based on kinect
http://event.asus.com/wavi. and wiimote devices. retrieved on December 2011 from
[2] J. Blake. Natural User Interfaces in .NET (Early http://youtu.be/ITtd02h5G5w.
Access Edition). Manning Publications Co., [21] B. Peek. Wiimote lib: Managed library for nintendo’s
Greenwich, CT, 2010. wiimote. retrieved on December 2011 from
[3] L. De Paolis, G. Aloisio, M. Celentano, L. Oliva, and http://wiimotelib.codeplex.com/.
P. Vecchio. Mediaevo project: A serious game for the [22] B. Santos, B. Prada, H. Ribeiro, P. Dias, S. Silva, and
edutainment. In Computer Research and Development C. Ferreira. Wiimote as an input device in google
(ICCRD), 2011 3rd International Conference on, earth visualization and navigation: A user study
volume 4, pages 524 –529, march 2011. comparing two alternatives. In Information
[4] Google. Gmail motion beta. retrieved on December Visualisation (IV), 2010 14th International
2011 from Conference, pages 473 –478, july 2010.
http://mail.google.com/mail/help/motion.html. [23] K. Sung. Recent videogame console technologies.
[5] H. Ip, J. Byrne, S. Cheng, and R. Kwok. The samal Computer, 44(2):91 –93, feb. 2011.
model for affective learning: A multidimensional [24] P. Thai. Using kinect and openni to embody an avatar
model incorporating the body, mind and emotion in in second life: Gesture & emotion transference.
learning. In DMS 2011 : The 17th International retrieved on December 2011 from
Conference on Distributed Multimedia Systems, pages http://tinyu.me/o2GzQ.
1–6, august 2011. [25] UBISOFT. Fighters uncaged. retrieved on December
[6] M. Kamel Boulos, B. Blanchard, C. Walker, 2011 from http://fighters-uncaged.uk.ubi.com/.
J. Montero, A. Tripathy, and R. Gutierrez-Osuna. [26] Various. Openkinect. retrieved on December 2011
Web gis in practice x: a microsoft kinect natural user from http://openkinect.org.
interface for google earth navigation. International [27] N. Villaroman, D. Rowe, and B. Swan. Teaching
Journal of Health Geographics, 10(1):45, 2011. natural user interaction using openni and the
[7] S. A. Lacolina, A. Soro, and R. Scateni. Natural microsoft kinect sensor. In Proceedings of the 2011
exploration of 3d models. In Proceedings of the 9th conference on Information technology education,
ACM SIGCHI Italian Chapter International SIGITE ’11, pages 227–232, New York, NY, USA,
Conference on Computer-Human Interaction: Facing 2011. ACM.
Complexity, CHItaly, pages 118–121, New York, NY, [28] D. Wigdor and D. Wixon. Brave NUI World:
USA, 2011. ACM. Designing Natural User Interfaces for Touch and
[8] J. Lee. Hacking the nintendo wii remote. Pervasive Gesture. Morgan Kaufmann, 1 edition, Apr. 2011.
Computing, IEEE, 7(3):39 –45, july-sept. 2008. [29] C. A. Wingrave, B. Williamson, P. D. Varcholik,
[9] J. Lewis. Ibm computer usability satisfaction J. Rose, A. Miller, E. Charbonneau, J. Bott, and J. J.
questionnaires: psychometric evaluation and LaViola Jr. The wiimote and beyond: Spatially
instructions for use. International Journal of convenient devices for 3d user interfaces. Computer
Human-Computer Interaction, 7(1):57–78, 1995. Graphics and Applications, IEEE, 30(2):71 –85,
[10] W. Liu. Natural user interface- next mainstream march-april 2010.
product user interface. In Computer-Aided Industrial [30] B. Witmer and M. Singer. Measuring presence in
Design Conceptual Design (CAIDCD), 2010 IEEE virtual environments: A presence questionnaire.
11th International Conference on, volume 1, pages 203 Presence, 7(3):225–240, 1998.
–205, nov. 2010. [31] C. Wohlin, P. Runeson, M. H¨st, M. C. Ohlsson,
o
[11] Microsoft. Bing maps beta. retrieved on December B. Regnell, and A. Wessl´n. Experimentation in
e
2011 from http://www.bing.com/maps/. software engineering: an introduction. Kluwer
[12] Microsoft. Dance central 2. retrieved on December Academic Publishers, Norwell, MA, USA, 2000.
2011 from http://tinyu.me/vO2eN. [32] Y. Yang and L. Li. Turn a nintendo wiimote into a
[13] Microsoft. Kinect. retrieved on December 2011 from handheld computer mouse. Potentials, IEEE, 30(1):12
http://www.xbox.com/en-US/Kinect. –16, jan.-feb. 2011.
[14] Microsoft. Kinect sport season two. retrieved on
December 2011 from http://tinyu.me/EZFNk.