The document discusses interaction design for augmented reality (AR) applications. It covers various AR interaction techniques including browsing interfaces, 3D interfaces, tangible interfaces using physical objects, and augmented surfaces that project virtual content onto physical surfaces. Keyboard, mouse, touch, gesture, and proximity interactions are discussed. The advantages and disadvantages of different approaches are summarized. Effective AR interaction requires considering input and output technologies to create an intuitive user experience.
COMP 4010 lecture on AR Interaction Design. Lecture given by Gun Lee at the University of South Australia on October 12th 2017, from slides prepared by Mark Billinghurst
3. AR Interaction
Designing AR System = Interface Design
Using different input and output technologies
Objective is a high-quality user experience
Ease of use and learning
Performance and satisfaction
7. More terminology
Interaction Device = Input/Output of User Interface
Interaction Style = category of similar interaction techniques
Interaction Paradigm
Modality (human sense)
Usability
8. Back to AR
You can see spatially registered AR...
how can you interact with it?
9. Interaction Tasks
2D (from [Foley]):
Selection, Text Entry, Quantify, Position
3D (from [Bowman]):
Navigation (Travel/Wayfinding)
Selection
Manipulation
System Control/Data Input
AR: 2D + 3D Tasks and... more specific tasks?
[Foley] Foley, J. D., V. Wallace & P. Chan. "The Human Factors of Computer Graphics Interaction Techniques." IEEE Computer Graphics and Applications (Nov.): 13-48, 1984.
[Bowman] Bowman, D., Kruijff, E., LaViola, J. & Poupyrev, I. 3D User Interfaces: Theory and Practice. Addison Wesley, 2005.
10. AR Interfaces as Data Browsers
2D/3D virtual objects are registered in 3D
"VR in Real World"
Interaction
2D/3D virtual viewpoint control
Applications
Visualization, training
11. AR Information Browsers
Information is registered to real-world context
Hand held AR displays
Interaction
Manipulation of a window into information space
Applications
Context-aware information displays
Rekimoto, et al. 1997
13. Current AR Information Browsers
Mobile AR
GPS + compass
Many Applications
Layar
Wikitude
Acrossair
PressLite
Yelp
AR Car Finder
…
14. Junaio
AR Browser from Metaio
http://www.junaio.com/
AR browsing
GPS + compass
2D/3D object placement
Photos/live video
Community viewing
18. Advantages and Disadvantages
Important class of AR interfaces
Wearable computers
AR simulation, training
Limited interactivity
Modification of virtual content is difficult
Rekimoto, et al. 1997
19. 3D AR Interfaces
Virtual objects displayed in 3D physical space and manipulated
HMDs and 6DOF head-tracking
6DOF hand trackers for input
Interaction
Viewpoint control
Traditional 3D user interface interaction: manipulation, selection, etc.
Kiyokawa, et al. 2000
22. Advantages and Disadvantages
Important class of AR interfaces
Entertainment, design, training
Advantages
User can interact with 3D virtual objects anywhere in space
Natural, familiar interaction
Disadvantages
Usually no tactile feedback
User has to use different devices for virtual and physical objects
Oshima, et al. 2000
23. Augmented Surfaces and Tangible Interfaces
Basic principles
Virtual objects are projected on a surface
Physical objects are used as controls for virtual objects
Support for collaboration
31. Other Examples
Triangles (Gorbet 1998)
Triangle-based storytelling
ActiveCube (Kitamura 2000-)
Cubes with sensors
32. Lessons from Tangible Interfaces
Physical objects make us smart
Norman’s “Things that Make Us Smart”
encode affordances, constraints
Objects aid collaboration
establish shared meaning
Objects increase understanding
serve as cognitive artifacts
33. TUI Limitations
Difficult to change object properties
can’t tell state of digital data
Limited display capabilities
projection screen = 2D
dependent on physical display surface
Separation between object and display
ARgroove
34. Advantages and Disadvantages
Advantages
Natural - user's hands are used for interacting with both virtual and real objects
- No need for special purpose input devices
Disadvantages
Interaction is limited to the 2D surface
- Full 3D interaction and manipulation is difficult
36. Back to the Real World
AR overcomes limitations of TUIs
enhance display possibilities
merge task/display space
provide public and private views
TUI + AR = Tangible AR
Apply TUI methods to AR interface design
37. Space-multiplexed
Many devices, each with one function
- Quicker to use, more intuitive, but more clutter
- e.g. a real toolbox
Time-multiplexed
One device with many functions
- Space efficient
- e.g. the mouse
46. Advantages and Disadvantages
Advantages
Natural interaction with virtual and physical tools
- No need for special purpose input devices
Spatial interaction with virtual objects
- 3D manipulation of virtual objects anywhere in physical space
Disadvantages
Requires Head Mounted Display
47. Wrap-up
Browsing Interfaces
simple (conceptually!), unobtrusive
3D AR Interfaces
expressive, creative, require attention
Tangible Interfaces
Embedded into conventional environments
Tangible AR
Combines TUI input + AR display
48. AR User Interface: Categorization
Traditional Desktop: keyboard, mouse, joystick (with or without 2D/3D GUI)
Specialized/VR Device: 3D VR devices, specially designed devices
49. AR User Interface: Categorization
Tangible Interface: using physical objects
Hand/Touch Interface: using pose and gesture of hands and fingers
Body Interface: using movement of body
52. Keyboard and Mouse Interaction
Traditional input techniques
OSG provides a framework for handling keyboard and mouse input events (osgGA)
1. Subclass osgGA::GUIEventHandler
2. Handle events:
• Mouse up / down / move / drag / scroll-wheel
• Key up / down
3. Add instance of new handler to the viewer
53. Keyboard and Mouse Interaction
Create your own event handler class
class KeyboardMouseEventHandler : public osgGA::GUIEventHandler {
public:
KeyboardMouseEventHandler() : osgGA::GUIEventHandler() { }
virtual bool handle(const osgGA::GUIEventAdapter& ea,osgGA::GUIActionAdapter& aa,
osg::Object* obj, osg::NodeVisitor* nv) {
switch (ea.getEventType()) {
// Possible events we can handle
case osgGA::GUIEventAdapter::PUSH: break;
case osgGA::GUIEventAdapter::RELEASE: break;
case osgGA::GUIEventAdapter::MOVE: break;
case osgGA::GUIEventAdapter::DRAG: break;
case osgGA::GUIEventAdapter::SCROLL: break;
case osgGA::GUIEventAdapter::KEYUP: break;
case osgGA::GUIEventAdapter::KEYDOWN: break;
}
return false;
}
};
Add it to the viewer to receive events
viewer.addEventHandler(new KeyboardMouseEventHandler());
54. Keyboard Interaction
Handle W,A,S,D keys to move an object
case osgGA::GUIEventAdapter::KEYDOWN: {
switch (ea.getKey()) {
case 'w': // Move forward 5mm
localTransform->preMult(osg::Matrix::translate(0, -5, 0));
return true;
case 's': // Move back 5mm
localTransform->preMult(osg::Matrix::translate(0, 5, 0));
return true;
case 'a': // Rotate 10 degrees left
localTransform->preMult(osg::Matrix::rotate(osg::DegreesToRadians(10.0f), osg::Z_AXIS));
return true;
case 'd': // Rotate 10 degrees right
localTransform->preMult(osg::Matrix::rotate(osg::DegreesToRadians(-10.0f), osg::Z_AXIS));
return true;
case ' ': // Reset the transformation
localTransform->setMatrix(osg::Matrix::identity());
return true;
}
break;
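// Setup (done once when building the scene graph, not inside the handler):
// the transform node that the key handler above manipulates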
localTransform = new osg::MatrixTransform();
localTransform->addChild(osgDB::readNodeFile("media/car.ive"));
arTransform->addChild(localTransform.get());
56. Mouse Interaction
Mouse is a pointing device…
Use mouse to select objects in an AR scene
OSG provides methods for ray-casting and intersection testing
Return an osg::NodePath (the path from the hit node all the way back to the root)
Diagram: a ray is cast from the clicked point on the projection plane (screen) into the scene
57. Mouse Interaction
Compute the list of nodes under the clicked position
Invoke an action on nodes that are hit, e.g. select, delete
case osgGA::GUIEventAdapter::PUSH: {
osgViewer::View* view = dynamic_cast<osgViewer::View*>(&aa);
osgUtil::LineSegmentIntersector::Intersections intersections;
// Clear previous selections
for (unsigned int i = 0; i < targets.size(); i++) {
targets[i]->setSelected(false);
}
// Find new selection based on click position
if (view && view->computeIntersections(ea.getX(), ea.getY(), intersections)) {
for (osgUtil::LineSegmentIntersector::Intersections::iterator iter = intersections.begin();
iter != intersections.end(); iter++) {
if (Target* target = dynamic_cast<Target*>(iter->nodePath.back())) {
std::cout << "HIT!" << std::endl;
target->setSelected(true);
return true;
}
}
}
} break;
60. Single Marker Techniques: Proximity
Use distance from camera to marker as input parameter
e.g. lean in close to examine
Can use the osg::LOD class to show different content at different depth ranges
Image: OpenSG Consortium
61. Single Marker Techniques: Proximity
// Load some models
osg::ref_ptr<osg::Node> farNode = osgDB::readNodeFile("media/far.osg");
osg::ref_ptr<osg::Node> closerNode = osgDB::readNodeFile("media/closer.osg");
osg::ref_ptr<osg::Node> nearNode = osgDB::readNodeFile("media/near.osg");
// Use a Level-Of-Detail node to show each model at different distance ranges.
osg::ref_ptr<osg::LOD> lod = new osg::LOD();
lod->addChild(farNode.get(), 500.0f, 10000.0f); // Show the "far" node from 50cm to 10m away
lod->addChild(closerNode.get(), 200.0f, 500.0f); // Show the "closer" node from 20cm to 50cm away
lod->addChild(nearNode.get(), 0.0f, 200.0f); // Show the "near" node from 0cm to 20cm away
arTransform->addChild(lod.get());
Define depth ranges for each node
Add as many as you want
Ranges can overlap
63. Multiple Marker Concepts
Interaction based on the relationship between markers
e.g. when the distance between two markers decreases below a threshold, invoke an action
Tangible User Interface
Applications:
Memory card games
File operations
64. Multiple Marker Proximity
Scene graph when Distance > Threshold: Virtual Camera -> Transform A and Transform B -> Switch A and Switch B -> Models A1/A2 and B1/B2 (each switch shows its first model)
65. Multiple Marker Proximity
Same scene graph when Distance <= Threshold: each switch now shows its second model
66. Multiple Marker Proximity
Use a node callback to test for proximity and update the relevant nodes
virtual void operator()(osg::Node* node, osg::NodeVisitor* nv) {
if (mMarkerA != NULL && mMarkerB != NULL && mSwitchA != NULL && mSwitchB != NULL) {
if (mMarkerA->valid() && mMarkerB->valid()) {
osg::Vec3 posA = mMarkerA->getTransform().getTrans();
osg::Vec3 posB = mMarkerB->getTransform().getTrans();
osg::Vec3 offset = posA - posB;
float distance = offset.length();
if (distance <= mThreshold) {
if (mSwitchA->getNumChildren() > 1) mSwitchA->setSingleChildOn(1);
if (mSwitchB->getNumChildren() > 1) mSwitchB->setSingleChildOn(1);
} else {
if (mSwitchA->getNumChildren() > 0) mSwitchA->setSingleChildOn(0);
if (mSwitchB->getNumChildren() > 0) mSwitchB->setSingleChildOn(0);
}
}
}
traverse(node,nv);
}
68. Paddle Interaction
Use one marker as a tool for selecting and manipulating objects (tangible user interface)
Another marker provides a frame of reference
A grid of markers can alleviate problems with occlusion
MagicCup (Kato et al) VOMAR (Kato et al)
69. Paddle Interaction
Often useful to adopt a local coordinate system
Allows the camera to move without disrupting Tlocal
Places the paddle in the same coordinate system as the content on the grid
Simplifies interaction
osgART computes Tlocal using the osgART::LocalTransformationCallback
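The callback is supplied by osgART, but the underlying idea is just a change of basis. A minimal sketch, assuming the base (grid) and paddle marker-to-camera poses are available as osg::Matrix values (this is not the actual osgART implementation):
#include <osg/Matrix>
// Express the paddle marker's pose in the base (grid) marker's coordinate frame.
// OSG uses row-vector convention, so a point maps paddle -> camera -> base:
//   p_base = p_paddle * paddleToCamera * inverse(baseToCamera)
osg::Matrix computeLocalTransform(const osg::Matrix& paddleToCamera,
                                  const osg::Matrix& baseToCamera)
{
    return paddleToCamera * osg::Matrix::inverse(baseToCamera);
}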
70. Tilt and Shake Interaction
Detect types of paddle movement:
Tilt
- gradual change in orientation
Shake
- short, sudden changes in translation
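The slides give no code for this, but a minimal sketch of the idea, using a hypothetical helper that compares successive paddle transforms (threshold values are arbitrary), could look like:
#include <osg/Matrix>
#include <osg/Quat>
#include <osg/Vec3>
#include <cmath>
// Hypothetical helper: classify paddle motion from frame-to-frame transform changes.
struct PaddleMotionDetector {
    osg::Vec3 lastPos;
    osg::Quat lastRot;
    bool hasLast = false;
    // shake = short, sudden translation; tilt = change in orientation since last frame
    void update(const osg::Matrix& paddleTransform, bool& shaken, bool& tilted,
                float shakeThreshold = 30.0f /* mm per frame */,
                double tiltThreshold = 0.1 /* radians per frame */)
    {
        osg::Vec3 pos = paddleTransform.getTrans();
        osg::Quat rot = paddleTransform.getRotate();
        shaken = tilted = false;
        if (hasLast) {
            shaken = (pos - lastPos).length() > shakeThreshold;
            double angle; osg::Vec3d axis;
            (rot * lastRot.inverse()).getRotate(angle, axis);
            tilted = std::fabs(angle) > tiltThreshold;
        }
        lastPos = pos;
        lastRot = rot;
        hasLast = true;
    }
};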
74. Local vs. Global Interactions
Local
Actions determined from single camera-to-marker transform
- shaking, appearance, relative position, range
Global
Actions determined from two relationships
- marker to camera, world to camera coords.
- Marker transform determined in world coordinates
• object tilt, absolute position, absolute rotation, hitting
75. Range-based Interaction
Sample File: RangeTest.c
/* get the camera transformation */
arGetTransMat(&marker_info[k], marker_center, marker_width, marker_trans);
/* find the range */
Xpos = marker_trans[0][3];
Ypos = marker_trans[1][3];
Zpos = marker_trans[2][3];
range = sqrt(Xpos*Xpos+Ypos*Ypos+Zpos*Zpos);
77. Finding Multiple Transforms
Create object list
ObjectData_T *object;
Read in objects - in init( )
read_ObjData( char *name, int *objectnum );
Find Transform – in mainLoop( )
for( i = 0; i < objectnum; i++ ) {
..Check patterns
..Find transforms for each marker
}
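A sketch of how that loop might be filled in, assuming the ObjectData_T fields used in the ARToolKit sample code (id, visible, marker_center, marker_width, trans) and the marker_info / marker_num results from arDetectMarker:
/* Sketch only - field names follow the ARToolKit samples */
for( i = 0; i < objectnum; i++ ) {
    /* check patterns: find the detected marker with this object's pattern id */
    k = -1;
    for( j = 0; j < marker_num; j++ ) {
        if( object[i].id == marker_info[j].id ) {
            /* keep the candidate with the highest confidence */
            if( k == -1 || marker_info[k].cf < marker_info[j].cf ) k = j;
        }
    }
    if( k == -1 ) { object[i].visible = 0; continue; }
    /* find the transform for this marker */
    arGetTransMat(&marker_info[k], object[i].marker_center,
                  object[i].marker_width, object[i].trans);
    object[i].visible = 1;
}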
78. Drawing Multiple Objects
Send the object list to the draw function
draw( object, objectnum );
Draw each object individually
for( i = 0; i < objectnum; i++ ) {
if( object[i].visible == 0 ) continue;
argConvGlpara(object[i].trans, gl_para);
draw_object( object[i].id, gl_para);
}
79. Proximity Based Interaction
Sample File – CollideTest.c
Detect distance between markers
checkCollisions(object[0],object[1], DIST)
If distance < collide distance
Then change the model/perform interaction
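The actual implementation lives in CollideTest.c, but a sketch of such a distance test, assuming ObjectData_T stores each marker's pose in its trans[3][4] field, might look like:
#include <math.h>
/* Sketch: return 1 if the two markers are closer than collide_dist (in mm) */
int checkCollisions( ObjectData_T object0, ObjectData_T object1, double collide_dist )
{
    double dx = object0.trans[0][3] - object1.trans[0][3];
    double dy = object0.trans[1][3] - object1.trans[1][3];
    double dz = object0.trans[2][3] - object1.trans[2][3];
    double dist = sqrt( dx*dx + dy*dy + dz*dz );
    return (dist < collide_dist) ? 1 : 0;
}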
80. Multi-marker Tracking
Sample File – multiTest.c
Multiple markers to establish a
single coordinate frame
Reading in a configuration file
Tracking from sets of markers
Careful camera calibration
81. MultiMarker Configuration File
Sample File - Data/multi/marker.dat
Contains a list of all the patterns and their exact positions (for each marker: the pattern file, the pattern width and coordinate origin, and the pattern transform relative to the global origin)
#the number of patterns to be recognized
6
#marker 1
Data/multi/patt.a
40.0
0.0 0.0
1.0000 0.0000 0.0000 -100.0000
0.0000 1.0000 0.0000 50.0000
0.0000 0.0000 1.0000 0.0000
…
82. Camera Transform Calculation
Include <AR/arMulti.h>
Link to libARMulti.lib
In mainLoop()
Detect markers as usual
arDetectMarkerLite(dataPtr, thresh, &marker_info, &marker_num)
Use MultiMarker Function
if( (err = arMultiGetTransMat(marker_info, marker_num, config)) < 0 ) {
argSwapBuffers();
return;
}
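The config structure passed to arMultiGetTransMat is loaded once at startup from the configuration file shown above; a minimal sketch of that step:
#include <stdio.h>
#include <stdlib.h>
#include <AR/arMulti.h>
ARMultiMarkerInfoT *config;
/* in init(): read the multi-marker configuration file */
if( (config = arMultiReadConfigFile("Data/multi/marker.dat")) == NULL ) {
    printf("config data load error !!\n");
    exit(0);
}
On success, arMultiGetTransMat fills config->trans with the transformation between the multi-marker coordinate frame and the camera.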
84. Paddle Interaction Code
Sample File – PaddleDemo.c
Get paddle marker location + draw paddle before drawing background model
paddleGetTrans(paddleInfo, marker_info, marker_flag, marker_num, &cparam);
/* draw the paddle */
if( paddleInfo->active ){
draw_paddle( paddleInfo);
}
draw_paddle uses a Stencil Buffer to increase realism
85. Paddle Interaction Code II
Sample File – paddleDrawDemo.c
Finds the paddle position relative to global coordinate frame:
setBlobTrans(Num,paddle_trans[3][4],base_trans[3][4])
Sample File – paddleTouch.c
Finds the paddle position:
findPaddlePos(&curPadPos,paddleInfo->trans,config->trans);
Checks for collisions:
checkCollision(&curPaddlePos,myTarget[i].pos,20.0)
86. General Tangible AR Library
command_sub.c, command_sub.h
Contains functions for recognizing a range of different paddle motions:
int check_shake( );
int check_punch( );
int check_incline( );
int check_pickup( );
int check_push( );
E.g. to check the angle between the paddle and the base:
check_incline(paddle->trans, base->trans, &ang)