This paper advocates for a new type of augmented reality (AR) interface called Tangible AR. Tangible AR interfaces combine the enhanced display possibilities of AR with intuitive physical manipulation from tangible user interfaces (TUIs). Specifically, 1) each virtual object is registered to a corresponding physical object, and 2) users interact with virtual objects by manipulating the physical objects. The paper presents some prototype Tangible AR interfaces and argues they support seamless interaction between real and virtual worlds through natural physical manipulations.
This document discusses augmented reality (AR) interfaces for visual analytics. It provides an overview of AR, including its key characteristics of combining real and virtual images in an interactive and registered 3D space. Examples are given of medical imaging applications and trials of AR interfaces. Design principles for AR user interfaces are outlined, including using physical objects, appropriate interaction metaphors, and principles from tangible interfaces. Case studies demonstrate AR lenses and collaborative AR interfaces. The document concludes with a discussion of future directions such as mobile phone AR, collaborative AR, and ubiquitous VR (UbiVR).
This document discusses building usable augmented reality interfaces. It emphasizes understanding user needs through methods like focus groups and observations. Both virtual and physical interface elements must be considered, along with the interaction metaphor. Rapid prototyping and user testing are important to develop compelling AR experiences. Evaluation of AR applications is also discussed. The goal is to design AR that effectively merges virtual and real information while addressing usability issues.
Learning of robots by using & sharing the cloud computing techniques - Er. Rahul Abhishek
Robots are shaping a new era in technology; many algorithms have been developed for their learning, which they can apply to find solutions to the problems they face. That is, we have tried to provide them with a basic intelligence (recognition, collision avoidance, etc.). Beyond that basic intelligence, we are interested in developing an intelligent system in which robots use their experiences and share them with each other, just as humans have gained their intelligence through experience, imagination, and sharing with one another. To implement this intelligent system, we can use a local server (used here interchangeably with "database") that is restricted to a single robot, and a cloud (global) server to which authorized robots can upload their experiences, which any robot authorized on that server can then use to solve its own problems.
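The local/global split described in this abstract can be sketched in a few lines; everything here (class names, the in-memory dictionaries, the authorization scheme) is an illustrative assumption, not part of the original proposal.

```python
# Illustrative sketch of the local/cloud experience-sharing scheme.
# All class and method names are hypothetical.

class LocalServer:
    """Per-robot experience store (the 'local database')."""
    def __init__(self):
        self.experiences = {}  # problem -> learned solution

    def record(self, problem, solution):
        self.experiences[problem] = solution

    def lookup(self, problem):
        return self.experiences.get(problem)


class CloudServer:
    """Shared store that only authorized robots may read or write."""
    def __init__(self, authorized_ids):
        self.authorized = set(authorized_ids)
        self.experiences = {}

    def upload(self, robot_id, problem, solution):
        if robot_id in self.authorized:
            self.experiences[problem] = solution

    def query(self, robot_id, problem):
        if robot_id in self.authorized:
            return self.experiences.get(problem)
        return None


def solve(robot_id, problem, local, cloud):
    """A robot first checks its own memory, then falls back to the cloud."""
    return local.lookup(problem) or cloud.query(robot_id, problem)
```

The key property of the scheme is visible in `solve`: a robot's own experience takes priority, and the cloud is consulted only for problems it has never seen, and only if the robot is authorized.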
A cognitive robot equipped with autonomous tool innovation expertise - IJECEIAES
Like a human, a robot may benefit from being able to use a tool to solve a complex task. When an appropriate tool is not available, a very useful ability for a robot is to create a novel one based on its experience. With the advent of inexpensive 3D printing, it is now possible to give robots such an ability, at least to create simple tools. We proposed a method for learning how to use an object as a tool and, if needed, to design and construct a new tool. The robot began by learning an action model of tool use for a PDDL planner by observing a trainer. It then refined the model by learning by trial and error. Tool creation consisted of generalising an existing tool model and generating a novel tool by instantiating the general model. Further learning by experimentation was performed. Reducing the search space of potentially useful tools could be achieved by providing a tool ontology. We then used a constraint solver to obtain numerical parameters from abstract descriptions and use them for a ready-to-print design. We evaluated our system using a simulated and a real Baxter robot in two cases: hook and wedge. We found that our system performs tool creation successfully.
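The final step the abstract describes — a constraint solver turning an abstract tool description into ready-to-print numeric parameters — can be illustrated with a toy discrete search. The dimensions, constraints, and function name below are invented for illustration and bear no relation to the paper's actual solver.

```python
# Toy stand-in for the constraint-solving step: pick numeric hook
# dimensions that satisfy abstract requirements. All numbers invented.
from itertools import product

def solve_hook(reach_mm, gap_mm):
    """Find (shaft_len, tip_len) such that the hook reaches the target
    and its tip fits the gap, by scanning a small discrete design space."""
    for shaft, tip in product(range(50, 301, 10), range(5, 51, 5)):
        if shaft >= reach_mm and gap_mm * 0.5 <= tip <= gap_mm:
            return {"shaft_len": shaft, "tip_len": tip}
    return None  # no printable design satisfies the constraints

params = solve_hook(reach_mm=120, gap_mm=20)
```

A real system would hand constraints like these to a general solver and feed the resulting parameters into a parametric CAD model for 3D printing; the brute-force scan here only shows the shape of the problem.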
IRJET - CARES (Computerized Avatar for Rhetorical & Emotional Supervision) - IRJET Journal
This document proposes the CARES (Computerized Avatar for Rhetorical and Emotional Supervision) framework. CARES aims to generate a 3D photo-realistic voice-enabled avatar that can express sentiments and emotions to act as a personal assistant. The avatar would use integrated aural and visual capabilities to provide emotional support to users. It would allow users to interact with virtual versions of deceased family members to stay connected. The framework builds upon prior work on avatar development, speech synthesis, and synchronizing facial animation with speech. It also incorporates techniques for modeling facial morphology and generating emotional transitions between different states.
Tollan Xicocotitlan: a reconstructed city by augmented reality (extended) - ijdms
This work presents the analysis, design, implementation, and results of the reconstruction of Tollan-Xicocotitlan through augmented reality (extended), which releases information about the Toltec capital. It presents an overview of the main premises of the city of Tollan-Xicocotitlan, supported by three-dimensional models based on the augmented reality technique, showing the user a virtual representation of the buildings of the Tollan phase.
U-bike is an urban transportation system designed to integrate ubiquitous computing technologies and bicycles. It aims to (1) capture the human-power energy generated from braking and return it to the energy grid, and (2) distribute location-based information to riders through personal devices. The system collects energy from bicycles at kiosks and awards riders points proportional to the energy collected. Riders can access smart services and an "urban manual" of local information through their devices. Integrating U-bike with public transit could extend transportation systems by eliminating short routes and reducing transfers between modes of transit.
This document discusses the potential for a 3D internet, or virtual worlds. It proposes an architecture for a 3D internet including world servers to provide content, avatar/ID servers to manage user identities, and universe location servers to provide virtual geographical information. A 3D internet could provide immersive experiences through spatial navigation of data and enable natural social interaction between users. Challenges include minimizing latency and ensuring security, privacy, and trust through identity management approaches. Potential applications discussed include virtual reality environments for shopping, socializing, and interacting with internet content in 3D.
IRJET - Augmented Reality based Platform to Share Virtual Worlds - IRJET Journal
This document discusses using augmented reality and virtual reality technologies to create a shared virtual world platform. It proposes combining Google Cardboard and TensorFlow Lite APIs to allow multiple users to simultaneously connect and interact with virtual content overlaid on the real world. The platform would use cloud computing and real-time databases to securely share information and dynamic virtual objects between users in augmented reality.
The document discusses gesture-based computing as an alternative to mouse input for human-computer interaction. It proposes a novel approach for implementing a real-time gesture recognition system capable of understanding commands based on analyzing the principal contour and fingertips of hand gestures. Vision-based gesture recognition techniques are discussed that do not require additional devices for users to interact with computers through natural hand motions.
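The contour-and-fingertips analysis this abstract mentions can be sketched without any camera pipeline: given a hand contour as 2D points, compute its convex hull and treat hull vertices well above the palm centroid as fingertip candidates. The function names, the synthetic coordinate convention (y grows upward), and the margin parameter are assumptions for illustration; a real vision system would run this on a segmented camera contour.

```python
# Minimal sketch of contour-based fingertip counting.

def convex_hull(points):
    """Andrew's monotone chain convex hull of a set of 2D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            # Pop while the last two points and p make a clockwise turn.
            while len(out) >= 2 and (
                (out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
                - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])  # lower hull + upper hull

def count_fingertips(contour, margin=1.5):
    """Count hull vertices lying clearly above the contour centroid."""
    hull = convex_hull(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return sum(1 for _, y in hull if y > cy + margin)
```

On a toy "hand" made of a palm rectangle plus three raised points, the three raised points land on the hull above the centroid and are counted as fingertips; real systems refine such candidates with convexity-defect analysis.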
FUTURISTIC TECHNOLOGY IN ARCHITECTURE & PLANNING - AUGMENTED AND VIRTUAL REAL... - ijscai
Speed has become a way of life, and we are piling up data at an ever-growing rate. Speed can be achieved with new design processes, techniques, and technology. Innovations such as AR and VR are just some of the many forms of technology that will play a key role in shaping the architecture and planning of tomorrow, making the field future-ready and ushering in a new age of innovation. AR and VR were introduced in architecture and planning as assisting tools; they have helped generate multiple design options, expanded the possibilities of visualization, and provided a more enhanced, detailed, and specific experience in real time, enabling us to see the results of the work at hand well before the commencement of the project. These tools are being further developed for city development decisions, helping citizens interact with local authorities, access public services, and plan their commutes. After reviewing multiple research papers, it has been observed that each field is moving forward with the changes this technology brings without entirely understanding its role. This paper provides a summary of the application of AR & VR in architecture and planning.
Presented live at FITC Amsterdam 2014 on Feb 24-25, 2014
More details can be found at www.FITC.ca
Interactive experiences that sustain user interest and imagination
For the last couple of years, Stefan Wagner has been working in the field of interactive exhibits, including installations and software that give the audience an understanding of specific (often hard to grasp) topics. Ranging between interactive design, information visualization and creative coding, Stefan learned to love the concept of putting the user in a place where he is not merely a consumer of media, but where he gets to actively and willingly explore the content presented to him – applications that sustain user interest and imagination. In the course of his studies as well as in commercial projects, he has tried to adopt this idea.
In his talk, Stefan Wagner will give you an insight into the development of some of the interactive projects he has been working or collaborating on and how they managed to catch user attention by deploying realtime visuals and/or new concepts of interaction like augmented reality and gesture-based input. He will also discuss conceptual as well as technical issues, problems and insights of these projects.
This document provides an overview of interaction design and human-computer interaction. It discusses how interaction design has evolved beyond traditional human-computer interaction to include new paradigms like ubiquitous and pervasive computing using wireless and collaborative technologies. The book aims to be up-to-date by including many examples of contemporary research. It defines interaction design as designing interactive products to support people in their everyday lives. The book includes 15 chapters that cover cognitive, social, and affective issues in interaction design and emphasize an iterative design process relying on both theory and practice. It is accompanied by a website providing additional resources, interactivities, and exercises for learning interaction design.
The 3D Internet in Web 3.0 is one of the most important technologies the world is looking forward to. Generally, we do things in daily life in three dimensions, yet we use the internet in 2D rather than 3D; the 3D Internet concept aims to close that gap.
Cross-Media Information Spaces and Architectures (CISA) - Beat Signer
Research on cross-media information spaces and architectures covering interactive paper, personal information management, data physicalisation, document engineering, gesture recognition, presentation tools, next generation user interfaces and other topics.
The document describes PrintScreen, a method for non-experts to design and print highly customized thin-film touch displays using electroluminescence. Key contributions include:
1) A screen printing or inkjet printing process to create 120 µm-thick, bendable displays on various substrates such as paper or plastic, in custom shapes and colors.
2) A design approach based on 2D graphics to integrate printed display elements with static artwork.
3) A sensing framework using the printed display for touch input.
Example applications demonstrate uses in ubiquitous, mobile, and wearable computing. Evaluation shows the bright, flexible displays are robust to bending.
This document discusses 3D printing and additive manufacturing. It begins by defining 3D printing as a process that creates 3D objects by laying down successive layers of material. Next, it explains how 3D printing works by using CAD software to virtually design an object and slice it into layers to guide the printer. The document also outlines some key benefits of 3D printing like increased innovation and faster time to market. Finally, it provides examples of industrial and domestic applications of 3D printing technology.
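The slicing step described above — cutting a virtual design into layers that guide the printer — can be sketched minimally: given a solid's z-extent and a layer height, compute the z planes at which material is deposited. The function name and the mid-layer slicing convention are assumptions; real slicers additionally intersect each plane with the triangle mesh to produce 2D toolpaths.

```python
# Minimal sketch of the layer-slicing step in 3D printing.

def layer_planes(z_min, z_max, layer_height):
    """Return the z coordinate of each successive printed layer,
    slicing through the middle of each layer."""
    planes = []
    z = z_min + layer_height / 2
    while z < z_max:
        planes.append(round(z, 6))  # round away float accumulation noise
        z += layer_height
    return planes
```

For a 1 mm-tall part printed at 0.2 mm layers this yields five planes (0.1, 0.3, 0.5, 0.7, 0.9), which is why halving the layer height roughly doubles print time.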
This document summarizes a 3D printing workshop that covered the current state of desktop 3D fabrication and 3D printing. It discussed how 3D printing has evolved from expensive industrial tools to more affordable desktop 3D printers, enabled by projects like RepRap and companies like MakerBot. The workshop taught basics of 3D modeling for printing and discussed how people are using 3D printing for custom enclosures, extensions, branding, and problem solving. Participants learned hands-on 3D printing and were assigned a tutorial to design an object to print.
This document provides an overview of different narrative design languages that could be used to analyze tabletop role-playing games (RPGs) and translate RPG design principles to digital interactive narratives. It discusses Joseph Campbell's monomyth framework, Vladimir Propp's morphology of folktales, the P.I.N.G. model that plots narratives on axes of passive-interactive and narrative-game, and Hartmut Koenitz's three-axis language focusing on agency, dramatic agency, and narrative complexity. The document determines that Koenitz's language is the most suitable for this research as it provides a simplified structure to analyze the wide variety of styles within RPGs and compare their design to digital interactive narratives.
This document discusses intelligent computing relating to cloud computing. It introduces applying artificial intelligence to cloud computing to develop self-managing computer systems. For example, developing software that regulates computer power consumption to reduce energy use. The document also discusses using affective computing and advanced intelligence to improve cloud computing efficiency by allowing applications to anticipate situations and make real-time decisions over the internet. Finally, it proposes that true cloud computing should be based on natural language understanding to allow access via lightweight devices like phones, not just traditional computers.
01 Computational Design and Digital Fabrication - Ayele Bedada
The document discusses computational design and digital fabrication. It defines computational design and discusses principles of computational design according to various experts. Computational design allows complex geometry, mass customization, and integration of fabrication constraints. It discusses how computational design transforms material from passive to active and allows bottom-up design processes. Several design projects utilizing computational design are also summarized.
Augmented reality techniques can enhance collaboration by providing spatial cues that improve awareness of partners' actions. Studies found AR collaboration mimicked natural face-to-face interaction more than video conferencing through gestures, speech patterns and subjective feedback. Future collaborative AR systems aim to seamlessly bridge the physical and virtual through wearable displays and tangible interactions to support remote and co-located collaboration.
This document discusses the design of augmented reality interfaces. It begins by describing different types of AR interfaces such as browsing interfaces, 3D interfaces, tangible interfaces, and tangible AR interfaces. It then discusses specific interface design considerations for AR like using physical objects as controls for virtual objects. The document provides examples of space-multiplexed and time-multiplexed tangible AR interfaces. It emphasizes designing AR interfaces using principles from tangible user interfaces. Overall, the document provides guidance on conceptualizing and building effective AR experiences through consideration of physical components, display elements, and interaction metaphors.
Separation of Organic User Interfaces: Envisioning the Diversity of Programma... - Felix Epp
Emerging technologies in nanotechnology, material science and micro-robotics will make Programmable Matter possible. This creates the vision of transformable user interfaces as the successor of today's user interfaces. This theoretical work discusses the new concepts in creating these interfaces. Instead of creating a singular device for the various use-cases and contents, future interfaces will focus more on representing the underlying content and mental models than on a certain technique. This will lead to a vast variety of physical devices, each representing one virtual entity and forming the overall user interaction in combination with each other. The concept of TiID – Time Interface Device – exemplifies such a device by including all time-dependent actions and attributes in one device.
This document provides a literature review and overview of augmented reality (AR) technologies. It discusses several enabling technologies for AR, including head-mounted displays, handheld displays, and tracking methods. The document reviews research on AR development tools, applications of AR in education, and challenges and future directions for AR. Key advantages of AR include enhancing real-world experiences and making invisible objects observable. However, challenges include cognitive overload and usability difficulties for some users. Overall, the literature indicates that AR has significant potential to improve education and other fields when designed appropriately.
This document summarizes AR interaction techniques, including 2D and 3D interaction tasks, AR interfaces as data browsers, 3D AR interfaces, augmented surfaces and tangible interfaces, and architecture for AR interaction design. Some key points discussed are:
- 2D interaction tasks include selection, text entry, quantification, and positioning, while 3D tasks include navigation, selection, manipulation, and system control.
- AR interfaces can function as data browsers by registering virtual objects to the real world for manipulation and browsing of information.
- Tangible interfaces use physical objects as controls for virtual objects to support collaboration and establish shared meaning.
- Designing AR systems requires considering both physical and virtual interface components, display elements, and interaction metaphors.
This document discusses the potential for a 3D internet, or virtual worlds. It proposes an architecture for a 3D internet including world servers to provide content, avatar/ID servers to manage user identities, and universe location servers to provide virtual geographical information. A 3D internet could provide immersive experiences through spatial navigation of data and enable natural social interaction between users. Challenges include minimizing latency and ensuring security, privacy, and trust through identity management approaches. Potential applications discussed include virtual reality environments for shopping, socializing, and interacting with internet content in 3D.
IRJET-Augmented Reality based Platform to Share Virtual WorldsIRJET Journal
This document discusses using augmented reality and virtual reality technologies to create a shared virtual world platform. It proposes combining Google Cardboard and TensorFlow Lite APIs to allow multiple users to simultaneously connect and interact with virtual content overlaid on the real world. The platform would use cloud computing and real-time databases to securely share information and dynamic virtual objects between users in augmented reality.
The document discusses gesture-based computing as an alternative to mouse input for human-computer interaction. It proposes a novel approach for implementing a real-time gesture recognition system capable of understanding commands based on analyzing the principal contour and fingertips of hand gestures. Vision-based gesture recognition techniques are discussed that do not require additional devices for users to interact with computers through natural hand motions.
FUTURISTIC TECHNOLOGY IN ARCHITECTURE & PLANNING - AUGMENTED AND VIRTUAL REAL...ijscai
Speed has become a way of life. We are asymptotically piling data. Speed can be achieved with new design
processes, techniques, and Technology. Innovations AR and VR are just some of the many forms of
technologies that will play a key role in shaping the Architecture and Planning of tomorrow, making it
future-ready and ushering in a new age of innovation. AR and VR in Architecture & Planning were
introduced as assisting tools and has helped generate multiple design options, expanded possibilities of
visualization, and provided us with more enhanced, detailed, and specific experience in real-time; enabling
us to see the resultsof work on hand well before the commencement of the project. These tools are further
developed for city development decisions, helping citizens interact with local authorities, access public
services, and plan their commute. After reviewing multiple research papers, it had been observed that each
one is moving forward with the changes brought by it, without entirely understanding its role. This paper
provides a summary of theappliance of AR & VR in architecture and planning.
Presented live at FITC Amsterdam 2014 on Feb 24-25, 2014
More details can be found at www.FITC.ca
Interactive experiences that sustain user interest and imagination
For the last couple of years, Stefan Wagner has been working in the field of interactive exhibits, including installations and software that give the audience an understanding of specific (often hard to grasp) topics. Ranging between interactive design, information visualization and creative coding, Stefan learned to love the concept of putting the user in a place where he is not merely a consumer of media, but where he gets to actively and willingly explore the content presented to him – applications that sustain user interest and imagination. In the course of his studies as well as in commercial projects, he has tried to adopt this idea.
In his talk, Stefan Wagner will give you an insight into the development of some of the interactive projects he has been working or collaborating on and how they managed to catch user attention by deploying realtime visuals and/or new concepts of interaction like augmented reality and gesture-based input. He will also discuss conceptual as well as technical issues, problems and insights of these projects.
This document provides an overview of interaction design and human-computer interaction. It discusses how interaction design has evolved beyond traditional human-computer interaction to include new paradigms like ubiquitous and pervasive computing using wireless and collaborative technologies. The book aims to be up-to-date by including many examples of contemporary research. It defines interaction design as designing interactive products to support people in their everyday lives. The book includes 15 chapters that cover cognitive, social, and affective issues in interaction design and emphasize an iterative design process relying on both theory and practice. It is accompanied by a website providing additional resources, interactivities, and exercises for learning interaction design.
3D Internet in Web 3.0 is one of the most important technologies world is looking forward to. Generally, we do our things manually in the daily life, which can be said to be in the form of 3D. But when it comes to internet we are actually using it in the form of 2D rather than 3D, hence this concept i.e. 3D Internet helps in achieving that.
Cross-Media Information Spaces and Architectures (CISA)Beat Signer
Research on cross-media information spaces and architectures covering interactive paper, personal information management, data physicalisation, document engineering, gesture recognition, presentation tools, next generation user interfaces and other topics.
The document describes PrintScreen, a method for non-experts to design and print highly customized thin-film touch displays using electroluminescence. Key contributions include:
1) A screen printing or inkjet printing process to create 120um thick, bendable displays on various substrates like paper or plastic in custom shapes and colors.
2) A design approach based on 2D graphics to integrate printed display elements with static artwork.
3) A sensing framework using the printed display for touch input.
Example applications demonstrate uses in ubiquitous, mobile, and wearable computing. Evaluation shows the bright, flexible displays are robust to bending.
This document discusses 3D printing and additive manufacturing. It begins by defining 3D printing as a process that creates 3D objects by laying down successive layers of material. Next, it explains how 3D printing works by using CAD software to virtually design an object and slice it into layers to guide the printer. The document also outlines some key benefits of 3D printing like increased innovation and faster time to market. Finally, it provides examples of industrial and domestic applications of 3D printing technology.
This document summarizes a 3D printing workshop that covered the current state of desktop 3D fabrication and 3D printing. It discussed how 3D printing has evolved from expensive industrial tools to more affordable desktop 3D printers, enabled by projects like RepRap and companies like MakerBot. The workshop taught basics of 3D modeling for printing and discussed how people are using 3D printing for custom enclosures, extensions, branding, and problem solving. Participants learned hands-on 3D printing and were assigned a tutorial to design an object to print.
This document provides an overview of different narrative design languages that could be used to analyze tabletop role-playing games (RPGs) and translate RPG design principles to digital interactive narratives. It discusses Joseph Campbell's monomyth framework, Vladimir Propp's morphology of folktales, the P.I.N.G. model that plots narratives on axes of passive-interactive and narrative-game, and Harmut Koenitz's three-axis language focusing on agency, dramatic agency, and narrative complexity. The document determines that Koenitz's language is the most suitable for this research as it provides a simplified structure to analyze the wide variety of styles within RPGs and compare their design to digital interactive narratives.
This document discusses intelligent computing relating to cloud computing. It introduces applying artificial intelligence to cloud computing to develop self-managing computer systems. For example, developing software that regulates computer power consumption to reduce energy use. The document also discusses using affective computing and advanced intelligence to improve cloud computing efficiency by allowing applications to anticipate situations and make real-time decisions over the internet. Finally, it proposes that true cloud computing should be based on natural language understanding to allow access via lightweight devices like phones, not just traditional computers.
01 Computational Design and Digital Fabrication - Ayele Bedada
The document discusses computational design and digital fabrication. It defines computational design and discusses principles of computational design according to various experts. Computational design allows complex geometry, mass customization, and integration of fabrication constraints. It discusses how computational design transforms material from passive to active and allows bottom-up design processes. Several design projects utilizing computational design are also summarized.
Augmented reality techniques can enhance collaboration by providing spatial cues that improve awareness of partners' actions. Studies found AR collaboration mimicked natural face-to-face interaction more than video conferencing through gestures, speech patterns and subjective feedback. Future collaborative AR systems aim to seamlessly bridge the physical and virtual through wearable displays and tangible interactions to support remote and co-located collaboration.
This document discusses the design of augmented reality interfaces. It begins by describing different types of AR interfaces such as browsing interfaces, 3D interfaces, tangible interfaces, and tangible AR interfaces. It then discusses specific interface design considerations for AR like using physical objects as controls for virtual objects. The document provides examples of space-multiplexed and time-multiplexed tangible AR interfaces. It emphasizes designing AR interfaces using principles from tangible user interfaces. Overall, the document provides guidance on conceptualizing and building effective AR experiences through consideration of physical components, display elements, and interaction metaphors.
Separation of Organic User Interfaces: Envisioning the Diversity of Programma... - Felix Epp
Emerging technologies in nanotechnology, material science and micro-robotics will make Programmable Matter possible. This creates the vision of transformable user interfaces as the successor of today’s user interfaces. This theoretical work discusses the new concepts in creating these interfaces. Instead of creating a singular device for the various use-cases and contents, future interfaces will focus more on representing the underlying content and mental models than on a certain technique. This will lead to a vast variety of physical devices, each representing one virtual entity and forming the overall user interaction in combination with each other. The concept of TiID – Time Interface Device – exemplifies such a device by including all time-dependent actions and attributes in one device.
This document provides a literature review and overview of augmented reality (AR) technologies. It discusses several enabling technologies for AR, including head-mounted displays, handheld displays, and tracking methods. The document reviews research on AR development tools, applications of AR in education, and challenges and future directions for AR. Key advantages of AR include enhancing real-world experiences and making invisible objects observable. However, challenges include cognitive overload and usability difficulties for some users. Overall, the literature indicates that AR has significant potential to improve education and other fields when designed appropriately.
This document summarizes AR interaction techniques, including 2D and 3D interaction tasks, AR interfaces as data browsers, 3D AR interfaces, augmented surfaces and tangible interfaces, and architecture for AR interaction design. Some key points discussed are:
- 2D interaction tasks include selection, text entry, quantification, and positioning, while 3D tasks include navigation, selection, manipulation, and system control.
- AR interfaces can function as data browsers by registering virtual objects to the real world for manipulation and browsing of information.
- Tangible interfaces use physical objects as controls for virtual objects to support collaboration and establish shared meaning.
- Designing AR systems requires considering both physical and virtual interface components, display elements,
Revolutionizing Creativity and Communication: Introducing Air Canvas - IRJET Journal
1. The document introduces Air Canvas, an interactive drawing platform that allows users to paint in mid-air using hand gestures detected through computer vision techniques.
2. It leverages OpenCV and MediaPipe to monitor real-time hand movements with precision and translate hand gestures into strokes on a digital canvas.
3. This work underscores the potential of computer vision and gesture recognition for immersive, creative applications that bridge the gap between technology and art.
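The core of such a system is the mapping from tracked fingertip positions to strokes on a digital canvas. A minimal NumPy-only sketch of that mapping is shown below; in the actual system the points would come from MediaPipe hand-landmark detection on live camera frames, which this example stubs out with a fixed list of coordinates. All names here are illustrative, not taken from the Air Canvas paper:

```python
import numpy as np

def draw_strokes(points, size=(240, 320)):
    """Rasterize a sequence of (x, y) fingertip positions as a stroke
    on a blank canvas by connecting consecutive points."""
    canvas = np.zeros(size, dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Linearly interpolate between consecutive tracked positions
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        canvas[ys, xs] = 255  # mark stroke pixels (row = y, column = x)
    return canvas

# In the real system these points would be the index-fingertip
# landmark reported by the hand tracker on each frame.
trail = [(10, 10), (10, 50), (40, 50)]
canvas = draw_strokes(trail)
print(int((canvas > 0).sum()))  # number of stroke pixels
```

A per-frame loop would append the current fingertip position to `trail` only while a "pen down" gesture (e.g. index finger extended) is detected, which is how gestures become strokes.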
SVHsIEVs for Navigation in Virtual Urban Environment - csandit
Many virtual reality applications, such as training, urban design, or gaming, are based on a rich semantic description of the environment. This paper describes a new representation of semantic virtual worlds. Our model, called SVHsIEVs, should provide a consistent representation of the following aspects: the simulated environment, its structure, and the knowledge items using ontology, interactions, and tasks that virtual humans can perform in the environment. Our first main contribution is to show the influence of semantic virtual objects on the environment. Our second main contribution is to use this semantic information to manage the tasks of each virtual object. We propose to define each task by a set of attributes and relationships, which determines the links between attributes in tasks, and links between other tasks. The architecture has been successfully tested in 3D dynamic environments for navigation in virtual urban environments.
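The idea of defining each task by a set of attributes plus links to other tasks might be sketched roughly as follows. This is a hypothetical structure, not the paper's actual model; the names `Task`, `attributes`, and `depends_on` are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task a virtual human can perform, defined by a set of
    attributes and by links to other tasks."""
    name: str
    attributes: dict = field(default_factory=dict)
    depends_on: list = field(default_factory=list)  # links between tasks

    def runnable(self, done):
        # A task can start once all linked prerequisite tasks are done
        return all(t.name in done for t in self.depends_on)

open_door = Task("open_door", {"target": "door_3", "duration_s": 2})
enter = Task("enter_building", {"target": "building_A"}, depends_on=[open_door])

print(enter.runnable(done=set()))          # False
print(enter.runnable(done={"open_door"}))  # True
```

The task links form a dependency graph over which a virtual human's behaviour can be scheduled, which is in the spirit of the attribute-and-relationship definition the abstract describes.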
FUTURISTIC TECHNOLOGY IN ARCHITECTURE & PLANNING - AUGMENTED AND VIRTUAL REAL... - ijscai
Speed has become a way of life. We are asymptotically piling up data. Speed can be achieved with new design processes, techniques, and technology. Innovations such as AR and VR are just some of the many forms of technology that will play a key role in shaping the architecture and planning of tomorrow, making it future-ready and ushering in a new age of innovation. AR and VR in architecture and planning were introduced as assisting tools and have helped generate multiple design options, expanded the possibilities of visualization, and provided a more enhanced, detailed, and specific experience in real time, enabling us to see the results of work on hand well before the commencement of the project. These tools are being further developed for city development decisions, helping citizens interact with local authorities, access public services, and plan their commute. After reviewing multiple research papers, it was observed that each one moves forward with the changes brought by these technologies without entirely understanding their role. This paper provides a summary of the application of AR and VR in architecture and planning.
FUTURISTIC TECHNOLOGY IN ARCHITECTURE & PLANNING - AUGMENTED AND VIRTUAL REAL... - ijscai
This document provides an overview of augmented reality (AR) and virtual reality (VR) technologies and their applications in architecture and urban planning. It first discusses how AR and VR have helped generate multiple design options, improve visualization, and provide more detailed virtual experiences of projects before they are built. It then reviews several research papers on the topics of AR/VR in design. Key applications discussed include enabling better collaboration and ideation, analyzing spatial relationships, and allowing public involvement in design decisions. The document also outlines some challenges to wider adoption of these technologies, such as technical limitations of equipment and the need for more specialized tools tailored for the design field.
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of... - Waqas Tariq
This document describes a study that evaluated the Microsoft Kinect device for natural interaction in an information visualization system called MetricSPlat. The researchers hypothesized that Kinect would enable more efficient interaction than a mouse for tasks like identifying clusters and outliers in multidimensional data projections. They used a participatory design process with users to develop an interaction scheme for controlling MetricSPlat with Kinect gestures. Usability tests were conducted during design to evaluate each iteration. After finalizing the Kinect scheme, comparative usability tests were performed between Kinect and mouse. The results found that while users reported high satisfaction with Kinect, it was less efficient than the mouse in terms of task completion times and precision for the specific visualization tasks in the
Design and Evaluation Case Study: Evaluating The Kinect Device In The Task of... - Waqas Tariq
We verify the hypothesis that Microsoft’s Kinect device is suited to defining more efficient interaction than the commodity mouse in the context of information visualization. To this end, we used Kinect during interaction design and evaluation of an information-visualization application (over agrometeorological, car, and flower datasets). The devices were tested on a visualization technique based on clouds of points (multidimensional projection) that can be manipulated by rotation, scaling, and translation. The design followed the Participatory Design technique (ISO 13407), and the evaluation comprised a large set of usability tests. In the tests, users reported high satisfaction scores (easiness and preference) but also low efficiency scores (time and precision). In the specific context of multidimensional-projection visualization, our conclusion is that, with respect to user acceptance, Kinect is an adequate device for natural interaction; but for desktop-based production, it still cannot compete with the traditional, long-established mouse design.
This document provides an overview of physical and embodied user interfaces. It begins with definitions of physical user interfaces and a brief history of their emergence related to ubiquitous computing. It then discusses frameworks for designing physical interfaces, including MCRit and Reality-Based Interaction, that aim to combine physical and digital aspects. The document showcases several research projects applying physical interfaces in domains like learning, social interaction, and information visualization. It concludes by outlining pros like increased usability and enjoyment, as well as challenges like requiring design of both physical and digital aspects and accounting for social/cultural contexts of physical interaction.
Augmented Reality with litrature review.pptx - MeghaGambhire
Augmented reality (AR) is an enhanced version of the real physical world that is achieved through the use of digital visual elements, sound, or other sensory stimuli delivered via technology.
This document summarizes a study that evaluated the impact of field of view (FOV) and tracking artifacts on user performance in an augmented reality (AR) information-seeking task. The study used a virtual reality (VR) system to simulate different FOV conditions (full vs constrained) and the presence or absence of mild tracking artifacts. Participants completed information retrieval tasks in a simulated AR tourism site. Results found that a constrained FOV significantly increased task completion time, and older participants were more impacted by tracking artifacts than younger ones. The study demonstrated the potential impact of FOV and tracking on AR design and experience.
Augmented Reality (AR) is a technology that overlays virtual objects onto real-world objects. It has three main features: the combination of the real world and the virtual world, real-time interaction, and 3D registration. The algorithms used to produce graphical images and other sensor-based inputs on real-world objects use the camera of your device. The shortest route and graphical information in 3D are not shown in normal maps. To avoid such problems, we have developed a 3D virtual environment that provides graphics and contains more information about a particular place. This project was built using the "UNITY" application; the engine can be used to create three-dimensional and two-dimensional content, which helps to view all these graphics and routes in a 3D view. R. Mohana Priya | Subash. R | Yogesh. R | Vignesh. M | Gopi. V "Augmented Reality Map" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30220.pdf Paper Url: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/30220/augmented-reality-map/r-mohana-priya
ORGANIC USER INTERFACES: FRAMEWORK, INTERACTION MODEL AND DESIGN GUIDELINES - ijasuc
Under the umbrella of ubiquitous computing lie the fields of natural, organic, and tangible user interfaces. Although some work has been done on organic user interface (OUI) design principles, no formalized framework has been set for OUIs and their interaction model, or design-specific guidelines, as far as we know. In this paper we propose an OUI framework from which we deduce an interaction model for organic systems design. Moreover, we recommend three main design principles for OUI design, in addition to a set of design-specific guidelines for each type of our interaction model.
This document proposes a method for real-time hand tracking using skin color detection and blob matching. It detects hands by identifying skin color pixels and moving areas between frames. For tracking, it matches skin color blobs between frames based on color distribution and predefined tracking rules. The method aims to provide fast and stable hand tracking for human-computer interaction applications. Experimental results showed the algorithm performed well in real-time.
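The two stages described above, skin-colour thresholding followed by matching blobs between frames, can be sketched minimally as follows. This is a NumPy-only illustration under stated assumptions: the threshold values are invented, the match rule is reduced to nearest centroid, and a real implementation would threshold in a colour space such as YCrCb (e.g. via OpenCV) and use the paper's colour-distribution comparison:

```python
import numpy as np

def skin_mask(frame, lo=120, hi=200):
    # Stand-in for a colour-space skin test: keep pixels in [lo, hi]
    return (frame >= lo) & (frame <= hi)

def centroid(mask):
    # Centre of mass of the detected skin pixels, as (row, col)
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def match_blob(prev_centroid, candidates):
    """Track a hand across frames: pick the candidate blob whose
    centroid is nearest to the previous frame's centroid."""
    dists = [np.linalg.norm(c - prev_centroid) for c in candidates]
    return int(np.argmin(dists))

frame = np.zeros((8, 8))
frame[1:3, 1:3] = 150        # a "skin-coloured" 2x2 blob
mask = skin_mask(frame)
prev = centroid(mask)        # centroid at (1.5, 1.5)
# Next frame: two candidate blobs; the nearer one is the same hand
candidates = [np.array([2.0, 2.0]), np.array([6.0, 6.0])]
print(match_blob(prev, candidates))  # 0
```

Running this per frame and carrying the matched centroid forward gives a simple hand track; the paper's predefined tracking rules would additionally reject matches whose colour distribution changes too much.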
The document describes experiments conducted to develop new interfaces for networked games that make virtual and real spaces seem seamless. It discusses two experiments: 1) A marker-based interface for an online dice game that uses physical dice with markers tracked by a camera. 2) A marker-less interface for an online trading card game that uses real trading cards instead of virtual ones. The goal is for the interfaces to better reflect natural human behaviors and increase immersion in the games.
The document discusses augmented reality (AR) and its potential applications. It defines AR as using transparent head-mounted displays to overlay computer-generated images onto the physical environment. It mentions how AR could be used to overlay additional information and knowledge to enhance learning. The document also discusses different types of AR including wearable, spatial, hand-held, and projection-based AR and some examples of technologies in each category. It raises issues about using AR on mobile devices and discusses how AR could enable new forms of collaboration and interaction between people.
Tangible Augmented Reality

Mark Billinghurst, HIT Laboratory, University of Washington, Box 352-142, Seattle, WA 98195, USA. grof@hitl.washington.edu
Hirokazu Kato, Faculty of Information Sciences, Hiroshima City University, 3-4-1 Ozaka-higashi, Asaminami-ku, Hiroshima 731-3194, JAPAN. kato@sys.im.hiroshima-cu.ac.jp
Ivan Poupyrev, Interaction Lab, Sony CSL, 3-14-13 Higashi-Gotanda, Tokyo 141-0022, JAPAN. poup@csl.sony.co.jp

ABSTRACT
This paper advocates a new metaphor for designing three-dimensional Augmented Reality (AR) applications, Tangible Augmented Reality (Tangible AR). Tangible AR interfaces combine the enhanced display possibilities of AR with the intuitive manipulation and interaction of physical objects, or Tangible User Interfaces. We define what Tangible AR interfaces are, present some design guidelines, and present prototype interfaces based on these guidelines. Experiences with these interfaces show that the Tangible AR metaphor supports seamless interaction between the real and virtual worlds, and provides a range of natural interactions that are difficult to find in other AR interfaces.

CR Categories: H.5.2 [User Interfaces] Input devices and strategies; H.5.1 [Multimedia Information Systems] Artificial, augmented, and virtual realities.
Keywords: Augmented Reality, Collaboration, CSCW.
Additional Keywords: Tangible User Interfaces

1 INTRODUCTION
In 1965 Ivan Sutherland built the first head-mounted display and used it to show a simple wireframe cube overlaid on the real world [1], creating the first Augmented Reality (AR) interface. We use the term Augmented Reality to mean interfaces in which three-dimensional computer graphics are superimposed over real objects, typically viewed through head-mounted or handheld displays.

Today, computers are incomparably faster and graphics are almost lifelike, but in many ways AR interfaces are still in their infancy. As Ishii says, the AR field has been primarily concerned with "...considering purely visual augmentations" [2], and while great advances have been made in AR display technologies and tracking techniques, interaction with AR environments has usually been limited to either passive viewing or simple browsing of virtual information registered to the real world. Few systems provide tools that let the user interact with, request, or modify this information effectively and in real time. Furthermore, even basic interaction tasks, such as manipulation, copying, annotating, and dynamically adding and deleting virtual objects in the AR scene, have been poorly addressed.

In this paper we advocate a new approach to designing AR interfaces that we refer to as Tangible Augmented Reality (Tangible AR). Tangible AR interfaces are those in which 1) each virtual object is registered to a physical object (Figure 1) and 2) the user interacts with virtual objects by manipulating the corresponding tangible objects. In the Tangible AR approach the physical objects and interactions are equally as important as the virtual imagery and provide a very intuitive way to interact with the AR interface.

Figure 1: The user interacts with 3D virtual objects by manipulating a tangible object, such as a simple paper card.

Our work derives a large part of its inspiration from Ishii's Tangible Media group [3]. The many projects developed by this group are built around the notion of the Tangible User Interface (TUI), in which real world objects are used as computer input and output devices, or as Ishii puts it, "by coupling digital information to everyday physical objects and environments" [2]. The Tangible AR approach builds on the principles suggested by TUI by coupling an AR visual display to a tangible physical interface.

In the remainder of this paper we first review related work and describe the need for new AR interface metaphors. We then describe our notion of Tangible AR in more detail and outline some Tangible AR design principles. Next we show how these design principles have been applied in a variety of prototype applications and describe the technology involved.
Note that some of the interfaces described have previously been written about in other publications. However, this is the first time that they have been used together as examples of the Tangible AR design approach. This paper aims to generalize scattered results, report new work, and present the conceptual vision that has been driving our exploration.

2 BACKGROUND
When a new interface medium is developed it typically progresses through the following stages:
1. Prototype demonstration
2. Adoption of interaction techniques from other interface metaphors
3. Development of new interface metaphors appropriate to the medium
4. Development of formal theoretical models for predicting and modeling user interactions

For example, the earliest immersive Virtual Reality (VR) systems were used to just view virtual scenes. Then interfaces such as 3DM [4] explored how elements of the desktop WIMP metaphor could be used to enable users to model immersively and support more complex interactions. Next, interaction techniques such as the Go-go [5] or World in Miniature [6] were developed which are unique to Virtual Reality. Most recently, researchers are attempting to arrive at a formal taxonomy for characterizing interaction in virtual worlds that would allow developers to build 3D virtual interfaces in a systematic manner [7].

In many ways AR interfaces have barely moved beyond the first stage. The earliest AR systems were used to view virtual models in a variety of application domains such as medicine [8] and machine maintenance [9]. These interfaces provided a very intuitive method for viewing three-dimensional information, but little support for creating or modifying the AR content. More recently, researchers have begun to address this deficiency. The AR modeler of Kiyokawa [10] uses a magnetic tracker to allow people to create AR content, while the Studierstube [11] and EMMIE [12] projects use tracked pens and tablets for selecting and modifying AR objects. More traditional input devices, such as a hand-held mouse or tablet [13][14], as well as intelligent agents [15], have also been investigated. However, these attempts have largely been based on existing 2D and 3D interface metaphors from desktop or immersive virtual environments. This means that in order to interact with virtual content the user uses a special purpose input device, and the number of physical input devices is limited.

In our work we are trying to develop interface metaphors that enable AR applications to move beyond being primarily 3D information browsers. In Augmented Reality there is an intimate relationship between 3D virtual models and the physical objects these models are attached to. This suggests that one promising research direction may arise from taking advantage of the immediacy and familiarity of everyday physical objects for effective manipulation of virtual objects.

For over a decade researchers have been investigating computer interfaces based on real objects. We have seen the development of the Digital Desktop [16], ubiquitous computing [17] and Tangible User Interfaces (TUI) [2], among others. The goal of these efforts is to make the computer vanish into familiar real world objects or the environment. As with VR interfaces, recent Tangible User Interfaces employ unique interface metaphors in areas such as 3D content creation [18], and work has begun on developing and testing models of user interaction [19].

Tangible interfaces are extremely intuitive to use because physical object manipulations are mapped one-to-one to virtual object operations, and they follow a space-multiplexed input design [19]. In general, input devices can be classified as either space- or time-multiplexed. With a space-multiplexed interface each function has a single physical device occupying its own space. In contrast, in a time-multiplexed design a single device controls different functions at different points in time. The mouse in a WIMP interface is a good example of a time-multiplexed device. Space-multiplexed devices are faster to use than time-multiplexed devices because users do not have to make the extra step of mapping the physical device input to one of several logical functions. In most manual tasks space-multiplexed devices are used to interact with the surrounding physical environment.

Although intuitive to use, with TUI interfaces information display can be a challenge. It is difficult to dynamically change an object's physical properties, so most information display is confined to image projection on objects or augmented surfaces. In those Tangible interfaces that use three-dimensional graphics there is also often a disconnect between the task space and display space. For example, in the Triangles work [20], physical triangles are assembled to tell stories, but the visual representations of the stories are shown on a separate monitor distinct from the physical interface. Presentation and manipulation of 3D virtual objects on projection surfaces is difficult [21], particularly when trying to support multiple users each with independent viewpoints. Most importantly, because the information display is limited to a projection surface, users are not able to pick virtual images off the surface and manipulate them in 3D space as they would a real object.

So we see that current Tangible interfaces provide very intuitive manipulation of digital data, but limited support for viewing 3D virtual objects. In contrast, current AR interfaces provide an excellent interface for viewing virtual models, but limited support for interaction and space-multiplexed input devices. We believe that a promising new AR interface metaphor can arise from combining the enhanced display possibilities of Augmented Reality with the intuitive
manipulation of Tangible User Interfaces. We call this combination Tangible Augmented Reality. In the next section we show how Tangible AR supports seamless interaction, and provide some design guidelines.

3 TANGIBLE AUGMENTED REALITY
The goal of computer interfaces is to facilitate seamless interaction between a user and their computer-supported task. In this context, Ishii defines a seam as a discontinuity or constraint in interaction that forces the user to shift among a variety of spaces or modes of operation [22]. Seams that force a user to move between interaction spaces are called functional seams, while those that force the user to learn new modes of operation are cognitive seams.

In the previous section we described how Tangible User Interfaces provide seamless interaction with objects, but may introduce a discontinuity or functional seam between the interaction space and display space. In contrast, most AR interfaces overlay graphics on the real world interaction space and so provide a spatially seamless display. However, they often force the user to learn different techniques for manipulating virtual content than normal physical object manipulation, or to use a different set of tools for interacting with real and virtual objects. So AR interfaces may introduce a cognitive seam.

A Tangible AR interface provides true spatial registration and presentation of 3D virtual objects anywhere in the physical environment, while at the same time allowing users to interact with this virtual content using the same techniques as they would with a real physical object. So an ideal Tangible AR interface facilitates seamless display and interaction, removing the functional and cognitive seams found in traditional AR and Tangible User Interfaces. This is achieved by using the design principles learned from TUI interfaces, including:
• The use of physical controllers for manipulating virtual content.
• Support for spatial 3D interaction techniques (such as using object proximity).
• Support for both time-multiplexed and space-multiplexed interaction.
• Support for multi-handed interaction.
• Matching the physical constraints of the object to the requirements of the interaction task.
• The ability to support parallel activity where multiple objects are being manipulated.
• Collaboration between multiple participants.

Our central hypothesis is that AR interfaces that follow these design principles will provide completely seamless interaction with virtual content and so will be extremely intuitive to use. In the next section we describe some prototype interfaces that support this hypothesis.

4 TANGIBLE AR INTERFACES
In order to explore the Tangible AR design space we have developed the following prototype interfaces:
Space-Multiplexed Interfaces
  Shared Space: A collaborative game
  ARgroove: A music performance interface
  Tiles: A virtual prototyping application
Time-Multiplexed Interfaces
  VOMAR: A scene assembly application
In this section we briefly describe these interfaces, showing how the Tangible AR design principles have been applied. Although each of these interfaces has been developed for a particular application, the interaction principles that we have explored in them are generic and can be broadly applied.

SHARED SPACE: Space-Multiplexed Interaction in AR
The Shared Space interface is an example of how Tangible AR principles can be used to design simple yet effective multiple-user space-multiplexed AR interfaces [23]. Shared Space was designed to create a collaborative AR game that could be used with no training. Several people stand around a table wearing Olympus HMDs with cameras attached (figure 2). The video from the camera is fed back into the HMD to give them a video-mediated view of the world with 3D graphics overlaid. On the table are cards, and when these are turned over, in their HMDs the users see different 3D virtual objects appearing on top of them. The users are free to pick up the cards and look at the models from any viewpoint.

Figure 2: Shared Space; the insert shows the user's view

The goal of the game is to collaboratively match objects that logically belong together. When cards containing correct matches are placed side by side an animation is triggered (figure 3). For example, when the card with the UFO on it is placed next to the alien, the alien appears to jump into the UFO and start to fly around the Earth.
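The match-triggering behaviour described for Shared Space can be sketched as a proximity test between tracked cards. The following is a minimal illustration, assuming each visible card's pose is delivered as a 4x4 camera-from-marker transform; the card names, matching table, and distance threshold are hypothetical, not taken from the original system:

```python
import numpy as np

# Hypothetical matched pairs, e.g. the UFO card and the alien card.
MATCHES = {("ufo", "alien"), ("alien", "ufo")}

def marker_distance(cam_T_a, cam_T_b):
    """Distance between two markers, using the translation parts of
    their camera-from-marker transforms (both in the camera frame)."""
    return np.linalg.norm(cam_T_a[:3, 3] - cam_T_b[:3, 3])

def check_proximity(poses, threshold_mm=80.0):
    """Return matched pairs of visible cards close enough to trigger
    their animation. `poses` maps card name -> 4x4 transform."""
    triggered = []
    ids = list(poses)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if (a, b) in MATCHES and marker_distance(poses[a], poses[b]) < threshold_mm:
                triggered.append((a, b))
    return triggered
```

Because every card carries the same test, adding a new matched pair is purely a data change, which is consistent with the space-multiplexed design: each card is its own device.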
When users were asked to comment on what they liked most about the exhibit, interactivity, how fun it was, and ease of use were the most common responses. Perhaps more interestingly, when asked what could be improved, people thought that reducing the tracking latency, improving image quality and improving HMD quality were most important.

Figure 3: Spatial interaction based on proximity

Although it is a very simple application, Shared Space provides a good test of the usefulness of the tangible interface metaphor for manipulating virtual models. Shared Space is essentially virtual model viewing software that supports six-degree-of-freedom viewing with a simple proximity-based interaction. However, rather than using a mouse or magnetic tracker to manipulate the model, users just hold cards. The form of the cards encourages people to manipulate them the same way they would use normal playing cards, such as turning them over, rotating them, holding them in the hands, passing them to each other and placing them next to each other. They can hold them in either hand and many cards can be uncovered at once to show a number of different objects.

Shared Space has been shown at Siggraph 99 and Imagina 2000, and the thousands of people that have tried it have had no difficulty with the interface. Users did not need to learn any complicated computer interface or command set, and they found it natural to pick up and manipulate the physical cards to view the virtual objects from every angle. Players would often spontaneously collaborate with strangers who had the matching card they needed. They would pass cards between each other, and collaboratively view objects and completed animations. By combining a tangible object with a virtual image we found that even young children could play and enjoy the game.

After the Imagina 2000 experience 157 people filled out a short user survey, answering the following questions on a scale of one to seven (1=not very easily, 7=very easily):
Q1: How easily could you play with other people?
Q2: How easily could you interact with the virtual objects?
Table 2 summarizes the results. Users felt that they could very easily play with the other people (5.64) and interact with the virtual objects (5.62). Both of these are significantly higher than the neutral value of 3.5; the t-test value row shows the results from a one-tailed t-test.

              Q1 (n=132)        Q2 (n=157)
Average       5.64              5.62
Std Dev.      1.19              1.20
t-test val.   237.09, p<0.01    278.74, p<0.01
Table 2: Shared Space Survey Results

ARGROOVE: 3D AR Interface for Interactive Music
ARGroove is a tangible AR musical interface that allows people to control interactive electronic musical compositions individually or in a group. Unlike the previous interface, ARGroove uses more complex 3D physical motion to control a non-visual modality, and it uses 3D widgets for AR displays.

In ARGroove music is constructed from a collection of short looped elements, each carefully composed to fit the others so they can be interactively mixed. For each individual loop a composer assigns filters and effects to allow the user to interactively modulate them.

The interface consists of a number of real LP records with markers, placed on a table in front of a projection screen. An overhead camera captures the image of the table and shows it on the screen with additional 3D widgets overlaid on each record (figure 7). The user can play, remix and modulate musical elements by picking up and manipulating the real records in space, which serve both as physical musical containers, grouping together related elements of a musical composition, and as tangible AR 3D controllers that allow users to interactively modulate music, and mix and fade between music pieces. To start playing a musical element, the user simply flips over the associated record so that the overhead camera can identify it and start playing the corresponding musical sequence. The user then modifies the sound by translating the record up and down, or rotating and tilting it (Figure 8); these 3D motions are mapped into corresponding modulation functions, e.g. pitch, distortion, amplitude, filter cut-off frequency, and delay mix. Since the system tracks and recognizes several records at the same time, users can play several musical elements simultaneously and collaboratively.

Figure 7: Two users are playing music in the Augmented Groove; the insert shows the view on the projector screen
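The mapping from record motion to modulation functions can be sketched as a normalization of the tracked pose into control values. The specific ranges and parameter assignments below are illustrative assumptions for one loop, not ARGroove's actual values (in ARGroove the composer chose the filters and effects per loop):

```python
def clamp01(x):
    """Clamp a value into the normalized control range [0, 1]."""
    return max(0.0, min(1.0, x))

def record_to_modulation(height_mm, tilt_deg, twist_deg):
    """Map a record's tracked 3D pose to normalized modulation values.
    Hypothetical ranges: 0-300 mm of lift, +/-45 degrees of tilt,
    a full turn of twist."""
    return {
        "filter_cutoff": clamp01(height_mm / 300.0),        # raise record -> open filter
        "distortion":    clamp01(abs(tilt_deg) / 45.0),     # tilt either way -> more distortion
        "pitch":         clamp01((twist_deg % 360) / 360.0) # rotate -> pitch sweep
    }
```

Clamping matters here: it gives each control a hard limit, which is the point at which the on-screen widget (the pyramid and pop-up character described below) can cue the user that the range is exhausted.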
A three-dimensional virtual controller is overlaid on top of each of the records, providing the user with instant visual feedback on the state and progression of the musical performance. For each control dimension a corresponding graphical element changes depending on the value of the control (figure 8). For example, as the user raises the record a pyramid in the middle also goes up, and when the musical control reaches its limit a small animated character pops up, cuing the user. Although virtual controls are not absolutely necessary to control music, they are essential to make the system easier and more enjoyable to use.

Figure 8: Gestural musical interface in Augmented Groove

ARGroove was demonstrated at SIGGRAPH 2000, where users could perform an electronic composition using three records. Over 1500 people tried the experience and the majority found it very enjoyable. One reason was that the range of possible musical variations afforded was very large, and the ability to simultaneously control multiple sound effects and filters resulted in a very expressive and enjoyable interface. Only a short explanation was usually sufficient for users to be able to effectively control the musical performance. We interviewed 25 users and of these 92% rated the experience as "Excellent". In terms of ease of learning, 40% rated it as "Excellent", while 52% said it was "Okay" and only 8% "Poor".

TILES: Putting Everything Together
Tiles is an AR authoring interface that explores how more complicated behaviors can be supported, including copying, pasting, deleting, and browsing virtual information in AR settings. In Shared Space all of the cards possessed the same functionality, whereas in Tiles we further explore space-multiplexed control by assigning different behaviors to different objects, creating the tangible 3D widgets that we first explored in ARGroove. We distribute functionality across tangible AR widgets (that we call tiles), letting the user choose an operation simply by picking up the needed tile. The application domain is rapid prototyping for aircraft instrument panels. The interface consists of a metal whiteboard, a book, and two stacks of magnetic tiles (approximately 15cm x 15cm). Sitting in front of the whiteboard, the user wears a lightweight high resolution Sony Glasstron HMD with a video camera attached (fig. 4).

Figure 4: Using the Tiles Interface

The various tangible elements of the interface serve different purposes. The whiteboard is the working space where users can lay out virtual aircraft instruments. The book serves as a menu object, and when the user looks through its pages they will see a different virtual instrument model on each page. One stack of tiles serves as data tiles and shows no virtual content until virtual objects are copied onto them. The remaining tiles are operator tiles and are used to perform basic operations on the data tiles. There is a unique tile for each operation, and currently supported operations include deletion, copying and a help function. Each of the operator tiles has a different three-dimensional virtual icon on it to show what its function is and tell it apart from the data tiles (fig 5).

Trashcan delete widget    Talking head help widget
Figure 5: Virtual Widgets on Operator Tiles

As with the Shared Space interface, virtual images appear attached to the physical objects and can be picked up and looked at from any viewpoint. Interaction between objects is also based on physical proximity; however, the operation that is invoked by bringing objects next to each other depends on their semantics. For example, to copy a virtual instrument from the menu book to an empty data tile, the tile
is just placed by the appropriate book page. However, touching a data tile that contains a virtual instrument with the trashcan delete tile removes the virtual instrument, while putting the help tile beside it displays a help message (fig 5). Once virtual instruments have been placed on the data tiles, these can be attached to the whiteboard to lay out a prototype virtual instrument panel (figure 6).

Figure 6: Virtual Instrument Panel view by the user

The main difference between this interface and the previous interfaces is the use of different shaped physical objects for different interface properties, and the assignment of different semantics to different objects. Supporting one interface function per object is similar to the interface models of desktop GUI interfaces, where each icon and tool has unique functionality. Despite this added functionality, the physical interactions are still based on object manipulation and proximity, showing that quite complex AR interfaces can be built from simple physical interactions. The use of different objects for different functions further emphasizes the space-multiplexed nature of the interface.

We have not evaluated the Tiles interface in a formal manner; however, we demonstrated it at the ISAR 2000 conference, where over 70 people tried it. Once again these users had no difficulty with the interface, although the wider range of interface functions typically meant that they needed to be shown how the different operator tiles worked before they tried it themselves.

VOMAR – Time-Multiplexed Interaction in AR
The VOMAR project explores how a time-multiplexed Tangible AR interface could be designed. VOMAR uses a single input device that allows the user to perform multiple different tasks in a virtual scene assembly application. To achieve this we explored how complex physical gestures can be used to support natural and effective interaction. The physical components of the interface comprise a real book, a cardboard paddle the user holds in their hand, a large piece of paper, and a lightweight HMD the user wears (figure 10 (a)). As before, the form of each of these objects reflects its function; the book serves as a container holding all the virtual models, the paddle is the main interaction device, and the large piece of paper is the workspace.

The application is layout of virtual furniture in a room, although the same interface can be applied to many domains. When the user opens the book, on each of its pages they see a different set of virtual furniture, such as a set of chairs, rugs etc (fig 10 (b)). The 3D virtual models appear exactly superimposed over the real book pages. Looking at the large piece of paper they see an empty virtual room. They can then copy and transfer objects from the book to the virtual room using the paddle (fig 10 (c,d)). The paddle is the main interaction device; it is a simple object with an attached tracking symbol. It is designed to be used by either hand and allows the user to make static and dynamic gestures to interact with the virtual objects:

Static gestures:
1. Paddle proximity to object
2. Paddle tilt/inclination (e.g. fig 10 (d))
3. Pushing an object (fig 10 (e))
Dynamic gestures:
1. Shaking (side to side motion of paddle)
2. Hitting (up and down motion of paddle)

(a) The VOMAR interface  (b) A virtual furniture menu
(c) Picking a virtual furniture object with the paddle  (d) Placing an object in the room by sliding it from the paddle
(e) Moving virtual objects  (f) Constructed scene
Figure 10: The VOMAR interface

To copy an object from the object book onto the paddle the user simply places the paddle beside the desired object; the close proximity is detected and the object is copied onto
the paddle (fig 10 (c)). Once a model is on the paddle it can be picked up and viewed from any viewpoint. To drop a model into the virtual scene the paddle is placed at the desired location and tilted until the model slides off (figure 10 (d)). Models in the scene can be pushed around by pushing motions of the paddle (fig. 10 (e)). A shaking motion is used to delete an object from the paddle, while models in the virtual room can be removed by hitting them. As can be seen, these interactions are very natural to perform with a real paddle, so in a matter of a few moments a user can assemble a fairly complex arrangement of virtual furniture (figure 10 (f)). Of course, what the user is really doing is interacting with a simple CAD program, but instead of using a mouse or keyboard they are just manipulating a cardboard paddle in very intuitive ways.

5 DISCUSSION
Tangible AR interfaces couple AR displays with tangible user interface controllers and 3D spatial interaction to design a wide variety of powerful AR interfaces. From our prototype interfaces we have found that there are several advantages of Tangible AR interfaces. First, Tangible AR interfaces are transparent interfaces that provide for seamless two-handed 3D interaction with both virtual and physical objects. They do not require participants to use or wear any special purpose input devices and tools, such as magnetic 3D trackers, to interact with virtual objects. Instead, users can manipulate virtual objects using the same input devices they use in the physical world – their own hands – leading to seamless interaction with the digital and physical worlds. This property also allows the user to easily use both digital and conventional tools in the same working space.

Tangible AR allows seamless spatial interaction with virtual objects anywhere in the physical workspace. The user is not confined to a certain workspace but can pick up and manipulate virtual data anywhere, just as with real objects, as well as arrange them on any working surface, such as a table or whiteboard. The digital and physical workspaces are therefore continuous, naturally blending together.

Tangible AR interfaces can allow the design of a simple yet effective and consistent AR interface model, providing a set of basic tools and operations that allow users, for example, to add, remove, copy, duplicate and annotate virtual objects in AR environments.

An interesting property of Tangible AR interfaces is their ad-hoc, highly re-configurable nature. Unlike traditional GUI and 3D VR interfaces, Tangible AR interfaces are in some sense designed by the user as they carry on with their work. In these interfaces the users are free to put interface elements anywhere they want: on tables and whiteboards, in boxes and folders, arranged in stacks or grouped together. How the interface components should be designed for such environments, whether they should be aware of the dynamic changes in their configuration, and how this can be achieved are interesting future research directions.

Another advantage is the use of physical form-factor to support interface functions. In our interfaces the physical design of the tangible interface elements provides affordances that suggest how they are to be used. So, for example, in the Shared Space interface users discover the proximity-based interactions because placing cards together is a natural behaviour suggested by the shape of the cards. Naturally, the physical form factor and the computer graphics design of the virtual images attached to the interfaces are important and should correspond to each other.

Finally, Tangible AR interfaces naturally support face-to-face collaboration, as shown by the Shared Space and ARgroove interfaces. People commonly use the resources of the physical world to establish a socially shared meaning [24]. Physical objects support collaboration by their appearance, the physical affordances they have, their use as semantic representations, their spatial relationships, and their ability to help focus attention. In a Tangible AR interface the physical objects can further be enhanced in ways not normally possible, such as providing dynamic information overlay, private and public data display, context sensitive visual cues, and physically based interactions.

One of the reasons why our interfaces are successful is that the virtual models appear to be attached to the physical objects they are associated with. In the next section we describe the tracking technology that makes this possible.

6 TRACKING TECHNOLOGY
Precise registration of real and virtual objects anywhere in space is a significant research problem in the AR field. Azuma provides a good review of the issues faced in AR tracking and registration [25], and there are a number of possible tracking approaches that could be used in developing Tangible AR interfaces. We have developed a computer-vision based method in which a virtual model can be fixed in space relative to one or more tracking markers [26]. These markers are simple black squares with a unique pattern inside them. Our approach is summarized in figure 13. After thresholding an input image, square markers are extracted and identified. Then the pose and position of each marker are estimated from the coordinates of its 4 vertices. Finally, virtual images are drawn on the input image. A more complete explanation is given in Appendix A.

One of the advantages of this method is that in a video see-through AR system, the video frame that is shown in the user's display is the same frame that is used to track their viewpoint, and so the virtual models can appear exactly overlaid on a real object. These markers are also simple and cheap and can be attached to any flat surface. The tracking works in real time (30 fps on a 766 MHz Pentium III) and is robust provided the square marker is in view.
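The pose-from-four-vertices step at the heart of this pipeline can be illustrated by the standard planar homography estimate: the four detected corner positions determine a 3x3 homography from the marker's square, from which rotation and translation can then be recovered given the camera intrinsics. The sketch below is an illustrative reconstruction using the direct linear transform, not the actual implementation described here; the function name and marker size are assumptions:

```python
import numpy as np

def homography_from_square(corners_px, size_mm=80.0):
    """Estimate the 3x3 homography mapping a square marker's corners
    (marker frame, size_mm on a side, centered at the origin) to their
    detected image positions, via the direct linear transform.
    `corners_px` lists the four image corners in the order
    (-s,-s), (s,-s), (s,s), (-s,s)."""
    s = size_mm / 2.0
    obj = [(-s, -s), (s, -s), (s, s), (-s, s)]
    rows = []
    for (X, Y), (u, v) in zip(obj, corners_px):
        # Each correspondence contributes two linear constraints on H.
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography is the null vector of the 8x9 system.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

With the camera calibration matrix known, the columns of the intrinsics-normalized homography yield the marker's rotation and translation, which is what lets the virtual model be drawn registered to the card in the same video frame.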
8. Figure 13: Vision based AR Tracking Process
Our system tracks six-degree-of-freedom manipulation of the physical markers, allows a user to use either hand to manipulate an object, and can track as many objects as there are markers in the field of view.

However, there are some limitations. In our system, the camera position and orientation is found in the marker's local coordinate frame. So if there are several markers in view, multiple camera transformations will be found, one for each marker. This use of local coordinate frames prevents measurement of some physical interactions; the distance between two objects can be found by comparing the camera-to-marker distances of the same camera in the two local coordinate frames, but to measure the tilt of an object its orientation must be known relative to some global coordinate frame. We overcome this problem by using sets of markers to define a global coordinate frame. For example, the workspace in the VOMAR interface is defined by the set of six square markers on the large paper mat.

6 CONCLUSIONS

Tangible Augmented Reality is a new approach to designing AR interfaces that emphasizes physical object form and interactions. Using design principles adapted from Tangible User Interfaces we can develop AR interfaces that support seamless interaction and are very intuitive to use. We believe that exploration of Tangible AR interfaces is a first step towards developing new physically-based interface metaphors that are unique to Augmented Reality.

In the future we plan to conduct more rigorous user studies to quantify the benefits of Tangible AR interfaces, as well as to develop a wider range of interaction techniques such as free-hand interaction and two-handed gesture input.

REFERENCES

1. Sutherland, I. The Ultimate Display. International Federation of Information Processing, Vol. 2, 1965, pp. 506-508.
2. Ishii, H., Ullmer, B. Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of CHI 97, Atlanta, Georgia, USA, ACM Press, 1997, pp. 234-241.
3. Tangible Media site: http://www.media.mit.edu/tangible/
4. Butterworth, J., Davidson, A., et al. (1992). 3DM: A Three Dimensional Modeler Using a Head-Mounted Display. Symposium on Interactive 3D Graphics, ACM.
5. Poupyrev, I., Billinghurst, M., Weghorst, S., Ichikawa, T. The Go-Go Interaction Technique. In Proc. of UIST '96, 1996, ACM, pp. 79-80.
6. Stoakley, R., Conway, M., Pausch, R. Virtual Reality on a WIM: Interactive Worlds in Miniature. In Proceedings of CHI 95, 1995, ACM Press.
7. Gabbard, J. L. A Taxonomy of Usability Characteristics in Virtual Environments. M.S. Thesis, Virginia Polytechnic Institute and State University, 1997. Available online at http://www.vpst.org/jgabbard/ve/framework/.
8. Bajura, M., Fuchs, H., et al. (1992). Merging Virtual Objects with the Real World: Seeing Ultrasound Imagery Within the Patient. SIGGRAPH '92, ACM.
9. Feiner, S., MacIntyre, B., et al. (1993). "Knowledge-Based Augmented Reality." Communications of the ACM 36(7): 53-62.
10. Kiyokawa, K., Takemura, H., Yokoya, N. "A Collaboration Supporting Technique by Integrating a Shared Virtual Reality and a Shared Augmented Reality", Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '99), Vol. VI, pp. 48-53, Tokyo, 1999.
11. Schmalstieg, D., Fuhrmann, A., et al. (2000). Bridging Multiple User Interface Dimensions with Augmented Reality Systems. ISAR 2000, IEEE.
12. Butz, A., Hollerer, T., et al. Enveloping Users and Computers in a Collaborative 3D Augmented Reality. In
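Chaining per-marker camera transforms into a common frame works as follows: if T_cm1 and T_cm2 are the camera-from-marker transforms for two markers, the pose of marker 2 in marker 1's frame is inv(T_cm1) · T_cm2. A minimal numpy sketch, with hypothetical poses (the matrices and distances below are illustrative, not VOMAR data):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical per-marker camera transforms, as the tracker would report them.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T_cm1 = make_T(np.eye(3), [0.0, 0.0, 500.0])   # marker 1: 500 mm in front of the camera
T_cm2 = make_T(Rz90, [100.0, 0.0, 500.0])      # marker 2: offset 100 mm, rotated 90 degrees

# Pose of marker 2 expressed in marker 1's (global) coordinate frame.
T_m1m2 = np.linalg.inv(T_cm1) @ T_cm2
distance = np.linalg.norm(T_m1m2[:3, 3])       # inter-marker distance
print(round(distance, 1))  # 100.0
```

The rotation block of T_m1m2 directly gives the relative tilt that cannot be read from either local frame alone.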
Proceedings of IWAR '99, October 20-21, San Francisco, CA, 1999, pp. 35-44.
13. Rekimoto, J., Ayatsuka, Y., et al. (1998). Augment-able Reality: Situated Communication through Physical and Digital Spaces. ISWC '98, IEEE.
14. Hollerer, T., Feiner, S., et al. (1999). "Exploring MARS: Developing Indoor and Outdoor User Interfaces to a Mobile Augmented Reality System." Computers & Graphics 23: 779-785.
15. Anabuki, M., Kakuta, H., et al. (2000). Welbo: An Embodied Conversational Agent Living in Mixed Reality Spaces. CHI 2000, Extended Abstracts, ACM.
16. Wellner, P. Interactions with Paper on the DigitalDesk. Communications of the ACM, Vol. 36, No. 7, July 1993, pp. 87-96.
17. Weiser, M. The Computer for the Twenty-First Century. Scientific American, 1991, 265(3), pp. 94-104.
18. Anderson, D., Frankel, J., et al. Tangible Interaction + Graphical Interpretation: A New Approach to 3D Modeling. In Proceedings of SIGGRAPH 2000, August 2000, New Orleans, ACM Press, pp. 393-402.
19. Fitzmaurice, G., Buxton, W. (1997). An Empirical Evaluation of Graspable User Interfaces: Towards Specialized, Space-Multiplexed Input. In Proceedings of CHI 97, pp. 43-50, New York, ACM.
20. Gorbet, M., Orth, M., Ishii, H. Triangles: Tangible Interface for Manipulation and Exploration of Digital Information Topography. In Proceedings of CHI 98, Los Angeles, CA, 1998.
21. Fjeld, M., Voorhorst, F., Bichsel, M., Lauche, K., Rauterberg, M. (1999). Exploring Brick-Based Navigation and Composition in an Augmented Reality. In Proceedings of HUC '99, pp. 102-116.
22. Ishii, H., Kobayashi, M., Arita, K. (1994). Iterative Design of Seamless Collaborative Media. CACM 37(8), 83-97.
23. Billinghurst, M., Poupyrev, I., Kato, H., May, R. (2000). Mixing Realities in Shared Space: An Augmented Reality Interface for Collaborative Computing. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2000), July 30-August 2, New York.
24. Gay, G., Lentini, M. Use of Communication Resources in a Networked Collaborative Design Environment. http://www.osu.edu/units/jcmc/IMG_JCMC/ResourceUse.html
25. Azuma, R. "A Survey of Augmented Reality", Presence, 6(4), pp. 355-385, 1997.
26. ARToolKit tracking software, available for download at: http://www.hitl.washington.edu/research/shared_space/download/

Appendix A: A Tangible AR Tracking Algorithm

Figure 14 shows the coordinate frames used in the tracking process. A square marker of known size is used as the base of the marker coordinate frame, in which the virtual objects are represented. The goal is to find the transformation matrix from marker coordinates to camera coordinates (Tcm), represented in eq. 1. This matrix consists of the rotation R3x3 and the translation T3x1. These values are found by detecting the markers in the camera image plane and using the perspective transformation matrix.

Figure 14: Coordinates System for Tracking

\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} =
\begin{bmatrix} R_{3\times3} & T_{3\times1} \\ 0\;0\;0 & 1 \end{bmatrix}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix} =
T_{cm} \begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}   (eq. 1)

1) Perspective Transformation Matrix

The camera coordinate frame is transformed to the ideal screen coordinates by the perspective matrix P (eq. 2). This perspective transformation matrix is found from an initial off-line calibration process.

\begin{bmatrix} h x_c \\ h y_c \\ h \\ 1 \end{bmatrix} =
P \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}, \qquad
P = \begin{bmatrix} P_{11} & P_{12} & P_{13} & 0 \\ 0 & P_{22} & P_{23} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (eq. 2)

2) Marker Detection
Step 1: Low-level Image Processing
Input images are thresholded by a constant value. Then all regions in the image are labeled.

Step 2: Marker Extraction

The regions whose outline contour can be fitted by four line segments are extracted. The parameters of these four line segments, and the coordinates of the four vertices of the regions found from the intersections of the line segments, are calculated in the ideal screen coordinates and stored.

3) Pose and Position Estimation

Step 1: Estimation of R3x3

When two parallel sides of a marker are projected onto the image screen, the equations of those line segments in the ideal screen coordinates are the following:

a_1 x + b_1 y + c_1 = 0, \qquad a_2 x + b_2 y + c_2 = 0   (eq. 3)

For each marker, the values of these parameters have already been obtained. The equations of the planes that include these two sides can be represented as (eq. 4) in the camera coordinate frame by substituting x_c and y_c from (eq. 2) for x and y in (eq. 3).

a_1 P_{11} X_c + (a_1 P_{12} + b_1 P_{22}) Y_c + (a_1 P_{13} + b_1 P_{23} + c_1) Z_c = 0
a_2 P_{11} X_c + (a_2 P_{12} + b_2 P_{22}) Y_c + (a_2 P_{13} + b_2 P_{23} + c_2) Z_c = 0   (eq. 4)

Given that the normal vectors of these planes are n1 and n2 respectively, the direction vector of the two parallel sides of the marker is given by their outer product. Given that the two unit direction vectors obtained in this way from the two sets of parallel sides of the marker are u1 and u2, these vectors should be perpendicular. However, image processing errors mean that the vectors won't be exactly perpendicular. To compensate for this, two perpendicular unit direction vectors v1 and v2 are defined in the plane that includes u1 and u2, as shown in figure 15. Given that the unit direction vector perpendicular to both v1 and v2 is v3, the rotation component R3x3 in the transformation matrix Tcm from marker coordinates to camera coordinates specified in eq. 1 is [v1t v2t v3t].

Figure 15: Two perpendicular unit direction vectors

Step 2: Estimation of T3x1

Since the rotation component R3x3 in the transformation matrix Tcm is now given, using (eq. 1) and (eq. 2) with the coordinates of the four marker vertices in the marker coordinate frame and the corresponding coordinates in the ideal screen coordinate frame, eight equations containing the translation components Tx, Ty, Tz are generated, and the values of Tx, Ty, Tz can be obtained from these equations.

Step 3: Modification of R3x3 and T3x1

The transformation matrix found by the method above may include error. However, this can be reduced through the following process. The vertex coordinates of the markers in the marker coordinate frame can be transformed to coordinates in the ideal screen coordinate frame using the transformation matrix obtained. The transformation matrix is then optimized so that the sum of the differences between these transformed coordinates and the coordinates measured from the image goes to a minimum. Though there are six independent variables in the transformation matrix, only the rotation components are optimized, and the translation components are then re-estimated using the method in step 2. By iterating this process a number of times, the transformation matrix is found more accurately. It would be possible to handle all six independent variables in the optimization process, but the computational cost has to be considered.

Evaluation of Tracking Accuracy

In order to evaluate the accuracy of the marker detection, the detected position and pose were recorded while a square marker 80 mm wide was moved perpendicular to the camera and tilted at different angles. Figure 16 shows the errors in position. Accuracy decreases the further the cards are from the camera and the further they are tilted away from the camera.

Figure 16: Errors of position
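The compensation in step 1 of the pose estimation — replacing the nearly perpendicular u1, u2 with an exactly perpendicular v1, v2 in their common plane, then completing the frame with v3 — can be sketched in numpy. The symmetric bisector construction below is one standard way to realize figure 15; the appendix does not spell out the exact formula, so treat this as an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

def rotation_from_side_directions(u1, u2):
    """Build R3x3 = [v1 v2 v3] from the two (nearly perpendicular) unit
    direction vectors of the marker's two pairs of parallel sides."""
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    # Orthogonal bisector pair: b1 bisects u1 and u2, b2 is perpendicular to b1
    # in their plane (exact for unit u1, u2, since (u1+u2) is orthogonal to (u1-u2)).
    b1 = (u1 + u2) / np.linalg.norm(u1 + u2)
    b2 = (u1 - u2) / np.linalg.norm(u1 - u2)
    # v1 stays close to u1, v2 close to u2, and v1 is exactly perpendicular to v2.
    v1 = (b1 + b2) / np.sqrt(2.0)
    v2 = (b1 - b2) / np.sqrt(2.0)
    v3 = np.cross(v1, v2)  # completes the right-handed frame
    return np.column_stack([v1, v2, v3])

# Noisy, not-quite-perpendicular measured side directions (hypothetical values).
u1 = np.array([1.0, 0.05, 0.0])
u2 = np.array([-0.05, 1.0, 0.0])
R = rotation_from_side_directions(u1, u2)
print(np.allclose(R @ R.T, np.eye(3)))  # True: a proper rotation despite the noise
```

If the measured u1 and u2 happen to be exactly perpendicular, the construction returns them unchanged, so it only redistributes the image-processing error symmetrically between the two directions.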