The document discusses research directions for augmented reality, including developing improved tracking, display, and input technologies; creating tools for authoring AR content; developing novel AR applications and interaction techniques; and exploring new types of AR experiences through user evaluations. It also examines specific challenges like occlusion handling in see-through displays and examples of gesture and multimodal interaction research.
A lecture on research directions in Augmented Reality as part of the COSC 426 class on AR. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
3. The Future is with us
It takes at least 20 years for new technologies to go from the lab to the lounge.
"The technologies that will significantly affect our lives over the next 10 years have been around for a decade. The future is with us. The trick is learning how to spot it. The commercialization of research, in other words, is far more about prospecting than alchemy."
Bill Buxton, Oct 11th 2004
7. Occlusion with See-through HMDs
The Problem:
- Occluding real objects with virtual ones
- Occluding virtual objects with real ones
(Figure: a real scene compared with the view through a current see-through HMD)
10. ELMO Design
(Diagram: virtual images from the LCD and light from the real world are merged by an optical combiner; a depth-sensing LCD mask sits in the real-world light path)
- Use the LCD mask to block the real world
- Use depth sensing for occluding virtual images
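In software, the same idea can be sketched as a per-pixel depth test: a virtual pixel is shown only where the virtual surface is closer than the sensed real surface. A minimal illustration in Python (the arrays and function are hypothetical, not part of the ELMO implementation):

```python
INF = float("inf")

def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel depth test: show a virtual pixel only where the virtual
    surface is closer than the sensed real surface. Pixels with no
    virtual content carry depth +inf and are never shown."""
    return [[v < r for r, v in zip(rrow, vrow)]
            for rrow, vrow in zip(real_depth, virtual_depth)]

# Toy 2x2 frame: a virtual object at depth 1.0 sits in front of a real
# wall at 2.0 but behind a real object at 0.5.
real_depth    = [[2.0, 2.0],
                 [0.5, 2.0]]
virtual_depth = [[1.0, INF],
                 [1.0, INF]]
mask = occlusion_mask(real_depth, virtual_depth)
# mask[0][0] is True (draw virtual); mask[1][0] is False (the real object occludes)
```

ELMO performs this test in hardware, driving the LCD mask from the depth sensor; the sketch only shows the decision logic.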
24. Lucid Touch
Microsoft Research & Mitsubishi Electric Research Labs
Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C. LucidTouch: A See-Through Mobile Device. In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007, pp. 269–278.
30. Haptic Modalities
Haptic interfaces
- Simple uses in mobiles (e.g. vibration instead of a ringtone)
Sony's TouchEngine
- Physiological experiments show users can perceive two stimuli 5 ms apart, and displacements as small as 0.2 microns
(Diagram: TouchEngine actuator built from n piezoelectric layers, 4 µm each, 28 µm total, driven by a voltage V)
38. AR MicroMachines
AR experience with environment awareness and physically-based interaction
- Based on the MS Kinect RGB-D sensor
- Augmented environment supports occlusion and shadows
- Physically-based interaction between real and virtual objects
41. System Flow
The system flow consists of three stages:
1. Image processing and marker tracking
2. Physics simulation
3. Rendering
42. Physics Simulation
- Create a virtual mesh over the real world
- Update it at 10 fps, so real objects can move
- Used by the physics engine for collision detection (virtual/real)
- Used by OpenSceneGraph for occlusion and shadows
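The update-rate split described above can be sketched as a toy loop: the real-world mesh is rebuilt at its own lower rate while tracking and rendering run every frame. Only the 10 fps mesh update comes from the slides; the 60 fps render rate below is an assumption for illustration:

```python
MESH_UPDATE_HZ = 10   # mesh refresh rate stated in the slides
RENDER_HZ = 60        # assumed rendering rate (not stated in the slides)

def run_frames(n_frames, mesh_update_hz=MESH_UPDATE_HZ, render_hz=RENDER_HZ):
    """Count how often the real-world mesh is rebuilt while rendering
    n_frames. Sketch of the three-stage flow: track, simulate, render."""
    mesh_updates = 0
    last_mesh_update = -1.0
    for frame in range(n_frames):
        t = frame / render_hz
        # 1. Image processing and marker tracking would happen here.
        # 2. Physics simulation: rebuild the real-world collision mesh
        #    only at the lower rate (10 fps in AR MicroMachines).
        if last_mesh_update < 0 or t - last_mesh_update >= 1.0 / mesh_update_hz:
            mesh_updates += 1
            last_mesh_update = t
        # 3. Rendering (occlusion and shadows) would happen every frame.
    return mesh_updates

# One simulated second at 60 fps triggers roughly 10 mesh rebuilds.
print(run_frames(60))  # -> 10
```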
45. Motivation
AR MicroMachines and PhobiAR:
- Treated the environment as static (no tracking)
- Tracked objects in 2D
More realistic interaction requires 3D gesture tracking.
46. Motivation
Occlusion issues:
- AR MicroMachines only achieved realistic occlusion because the user's viewpoint matched the Kinect's
- Proper occlusion requires a more complete model of scene objects
48. HITLabNZ's Gesture Library
Architecture layers: 1. Hardware Interface; 2. Segmentation; 3. Classification/Tracking; 4. Modeling (hand recognition/modeling, rigid-body modeling); 5. Gesture (static, dynamic, and context-based gestures)
Layer 1, Hardware Interface:
- Supports PCL, OpenNI, OpenCV, and the Kinect SDK
- Provides access to depth, RGB, and XYZRGB data
- Usage: capturing color images, depth images, and concatenated point clouds from a single camera or multiple cameras
- Example devices: Kinect for Xbox 360, Kinect for Windows, Asus Xtion Pro Live
49. HITLabNZ's Gesture Library
Layer 2, Segmentation:
- Segments images and point clouds based on color, depth, and space
- Usage: segmenting images or point clouds using color models, depth, or spatial properties such as location, shape, and size
- Examples: skin-color segmentation, depth thresholding
50. HITLabNZ's Gesture Library
Layer 3, Classification/Tracking:
- Identifies and tracks objects between frames based on XYZRGB data
- Usage: identifying the current position/orientation of the tracked object in space
- Example: a training set of hand poses in which colors represent unique regions of the hand; raw (uncleaned) output classified on real hand input (depth image)
51. HITLabNZ's Gesture Library
Layers 4 and 5, Modeling and Gesture:
- Hand recognition/modeling: skeleton-based (for a low-resolution approximation) or model-based (for a more accurate representation)
- Object modeling: identification and tracking of rigid-body objects
- Physical modeling (physical interaction): sphere proxy, model-based, or mesh-based
- Usage: general spatial interaction in AR/VR environments
52. Method
Represent models as collections of spheres moving with the models in the Bullet physics engine.
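The sphere-proxy representation reduces collision detection to sphere-sphere distance tests, which is what makes it cheap inside a physics engine such as Bullet. A standalone toy sketch of that test (not Bullet itself; the hand and cube proxies are invented):

```python
import numpy as np

def spheres_collide(spheres_a, spheres_b):
    """Two models collide if any proxy sphere of one overlaps any proxy
    sphere of the other (centre distance < sum of radii).
    Each sphere is a (centre, radius) pair."""
    for ca, ra in spheres_a:
        for cb, rb in spheres_b:
            if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) < ra + rb:
                return True
    return False

# Invented proxies: two spheres along a "finger", one on a cube corner.
hand = [((0.0, 0.0, 0.0), 0.05), ((0.1, 0.0, 0.0), 0.05)]
cube = [((0.12, 0.0, 0.0), 0.04)]
print(spheres_collide(hand, cube))  # True: fingertip sphere overlaps the cube proxy
```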
53. Method
Render the AR scene with OpenSceneGraph, using the depth map for occlusion. Shadows are yet to be implemented.
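Depth-map occlusion amounts to a per-pixel comparison: a virtual fragment is drawn only where it is closer to the camera than the real surface the depth sensor sees. A minimal numpy sketch of the compositing step (not the actual OpenSceneGraph implementation):

```python
import numpy as np

def composite(camera_rgb, sensor_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion test: show the virtual fragment only where
    its depth is smaller than the real depth from the sensor."""
    show_virtual = virtual_depth < sensor_depth
    out = camera_rgb.copy()
    out[show_virtual] = virtual_rgb[show_virtual]
    return out

cam = np.zeros((2, 2, 3), dtype=np.uint8)           # real image (black)
real_d = np.array([[1.0, 1.0],                      # far wall (top row)
                   [0.5, 0.5]])                     # near table (bottom row)
virt = np.full((2, 2, 3), 255, dtype=np.uint8)      # virtual object (white)
virt_d = np.full((2, 2), 0.8)                       # virtual object at 0.8 m
out = composite(cam, real_d, virt, virt_d)
print(out[0, 0, 0], out[1, 0, 0])  # 255 0: visible against the wall, occluded by the table
```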
57. Multimodal Interaction
Combined speech and gesture input
Gesture and speech are complementary:
Speech: modal commands, quantities
Gesture: selection, motion, qualities
Previous work found multimodal interfaces intuitive for 2D/3D graphics interaction
58. 1. Marker Based Multimodal Interface
Add speech recognition to VOMAR
Paddle + speech commands
60. Commands Recognized
Create Command "Make a blue chair": create a virtual object and place it on the paddle.
Duplicate Command "Copy this": duplicate a virtual object and place it on the paddle.
Grab Command "Grab table": select a virtual object and place it on the paddle.
Place Command "Place here": place the attached object in the workspace.
Move Command "Move the couch": attach a virtual object in the workspace to the paddle so that it follows the paddle movement.
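The command set above could be fused with paddle input along these lines. This keyword-spotting parser is a hypothetical sketch, not VOMAR's actual speech grammar; the paddle_target argument stands in for whatever object the paddle is pointing at.

```python
def parse_command(utterance, paddle_target=None):
    """Toy fusion of VOMAR-style speech commands with paddle context:
    spot the leading keyword, then fill in the object either from the
    utterance itself or from the paddle's current target."""
    words = utterance.lower().rstrip(".!").split()
    if words[0] == "make":
        return ("create", " ".join(words[2:]))     # "Make a blue chair"
    if words[0] == "copy":
        return ("duplicate", paddle_target)        # "Copy this"
    if words[0] == "grab":
        return ("grab", words[1])                  # "Grab table"
    if words[0] == "place":
        return ("place", paddle_target)            # "Place here"
    if words[0] == "move":
        return ("move", " ".join(words[2:]))       # "Move the couch"
    return ("unknown", None)

print(parse_command("Make a blue chair"))        # ('create', 'blue chair')
print(parse_command("Copy this", "couch"))       # ('duplicate', 'couch')
```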
71. Results
Average performance time (MMI and speech fastest):
Gesture: 15.44 s
Speech: 12.38 s
Multimodal: 11.78 s
No difference in user errors
User subjective survey
Q1: How natural was it to manipulate the object? MMI and speech rated significantly better
70% preferred MMI, 25% speech only, 5% gesture only
73. Intelligent Interfaces
Most AR systems are not intelligent:
They don't recognize user behaviour
They don't provide feedback
They don't adapt to the user
Intelligence is especially important for training:
Scaffolded learning
Moving beyond check-lists of actions
74. Intelligent Interfaces
AR interface + intelligent tutoring system
ASPIRE constraint-based system (from UC)
Constraints: relevance condition, satisfaction condition, feedback
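The constraint triples can be sketched directly: each constraint pairs a relevance condition and a satisfaction condition with a feedback message, and feedback fires only when a relevant constraint is violated. The example constraint below is invented for illustration, not taken from ASPIRE.

```python
def check_constraints(state, constraints):
    """Constraint-based tutoring sketch: each constraint is a
    (relevance, satisfaction, feedback) triple. Feedback is produced
    only for constraints that are relevant but not satisfied."""
    messages = []
    for relevance, satisfaction, feedback in constraints:
        if relevance(state) and not satisfaction(state):
            messages.append(feedback)
    return messages

# Hypothetical assembly-training constraint: when the learner is
# attaching the motor, the frame must already be in place.
constraints = [
    (lambda s: s["current_step"] == "attach_motor",
     lambda s: "frame" in s["completed"],
     "Attach the frame before the motor."),
]
state = {"current_step": "attach_motor", "completed": set()}
print(check_constraints(state, constraints))  # ['Attach the frame before the motor.']
```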
78. Evaluation Results
16 subjects, with and without ITS
Improved task completion
Improved learning
79. Intelligent Agents
AR characters
Virtual embodiment of system
Multimodal input/output
Examples
AR Lego, Welbo, etc
Mr Virtuoso
- AR character more real, more fun
- On-screen 3D and AR similar in usefulness
95. Example: Visualizing Sensor Networks
Rauhala et al. 2007 (Linköping)
Network of Humidity Sensors
ZigBee wireless communication
Use Mobile AR to Visualize Humidity
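For a humidity overlay like Rauhala et al.'s, each sensor reading must be mapped to some visual property of the AR annotation. A hypothetical blue-to-red colour ramp (the ramp and the sensor names are invented, not from the paper):

```python
def humidity_to_color(humidity):
    """Map a relative-humidity reading (0-100 %) to an RGB colour for
    the AR overlay: dry sensors render blue, saturated ones red."""
    t = max(0.0, min(1.0, humidity / 100.0))
    return (round(255 * t), 0, round(255 * (1 - t)))

# Invented sensor readings from the ZigBee network.
readings = {"sensor_12": 20.0, "sensor_13": 85.0}
for node, h in readings.items():
    print(node, humidity_to_color(h))
# sensor_12 renders mostly blue, sensor_13 mostly red
```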
103. Hybrid User Interfaces
Goal: To incorporate AR into a normal meeting environment
Physical Components
Real props
Display Elements
2D and 3D (AR) displays
Interaction Metaphor
Use multiple tools – each relevant for the task
104. Hybrid User Interfaces
1. PERSONAL: Private Display
2. TABLETOP: Private Display, Group Display
3. WHITEBOARD: Private Display, Public Display
4. MULTIGROUP: Private Display, Group Display, Public Display
105. Ubiquitous AR
[Diagram, from Joe Newman: Weiser's Terminal-to-Ubiquitous axis crossed with Milgram's Reality-to-Virtual Reality axis, locating Terminal, Desktop AR, Mobile AR, Ubi AR, UbiComp, VR, and Ubi VR.]
106. [Diagram: the same taxonomy extended with a Single User-to-Massive Multi User axis, alongside Weiser's Terminal-to-Ubiquitous axis and Milgram's Reality-to-VR axis.]
116. Future Directions
Massive Multiuser
Handheld AR for the first time allows extremely high
numbers of AR users
Requires
New types of applications/games
New infrastructure (server/client/peer-to-peer)
Content distribution…
121. Leveraging Web 2.0
Content retrieval using HTTP
XML encoded meta information
KML placemarks + extensions
Queries
Based on location (from GPS, image recognition)
Based on situation (barcode markers)
Queries also deliver tracking feature databases
Everybody can set up an AR 2.0 server
Syndication:
Community servers for end-user content
Tagging
AR client subscribes to arbitrary number of feeds
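The location-based query described above can be sketched as filtering placemarks by distance from the user's GPS fix. The placemark data and radius here are invented, and a real AR 2.0 server would deliver KML over HTTP rather than Python dicts; the distance uses an equirectangular approximation, which is fine at these scales.

```python
import math

def query_placemarks(placemarks, lat, lon, radius_m=100.0):
    """Location-based content query sketch: return the names of the
    KML-style placemarks within radius_m metres of (lat, lon)."""
    hits = []
    for pm in placemarks:
        dlat = math.radians(pm["lat"] - lat)
        dlon = math.radians(pm["lon"] - lon) * math.cos(math.radians(lat))
        dist = 6371000.0 * math.hypot(dlat, dlon)  # Earth radius x angle
        if dist <= radius_m:
            hits.append(pm["name"])
    return hits

# Invented placemarks around the user's position.
placemarks = [
    {"name": "Cafe overlay", "lat": -43.5225, "lon": 172.5810},
    {"name": "Campus map", "lat": -43.5300, "lon": 172.5900},
]
print(query_placemarks(placemarks, -43.5226, 172.5811))  # ['Cafe overlay']
```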
122. Content
Content creation and delivery
Content creation pipeline
Delivering previously unknown content
Streaming of
Data (objects, multi-media)
Applications
Distribution
How do users learn about all that content?
How do they access it?
128. AR Research in the HIT Lab NZ
Gesture interaction
Gesture library
Multimodal interaction
Collaborative speech/gesture interfaces
Mobile AR interfaces
Outdoor AR, interaction methods, navigation tools
AR authoring tools
Visual programming for AR
Remote Collaboration
Mobile AR for remote interaction
129. More Information
• Mark Billinghurst
– mark.billinghurst@hitlabnz.org
• Websites
– http://www.hitlabnz.org/
– http://artoolkit.sourceforge.net/
– http://www.osgart.org/
– http://www.hitlabnz.org/wiki/buildAR/