AR TRACKING AND INTERACTION
COMP 4010 Lecture Four
Mark Billinghurst
August 17th 2021
mark.billinghurst@unisa.edu.au
REVIEW
Augmented Reality Definition
• Combines Real and Virtual Images
• Both can be seen at the same time
• Interactive in real-time
• The virtual content can be interacted with
• Registered in 3D
• Virtual objects appear fixed in space
Augmented Reality Technology
• Combines Real and Virtual Images
• Needs: Display technology
• Interactive in real-time
• Needs: Input and interaction technology
• Registered in 3D
• Needs: Viewpoint tracking technology
Example: MagicLeap ML-1 AR Display
•Display
• Multi-layered Waveguide display
•Tracking
• Inside out SLAM tracking
•Input
• 6DOF wand, gesture input
AR Display Technologies
• Classification (Bimber/Raskar 2005)
• Head attached
• Head mounted display/projector
• Body attached
• Handheld display/projector
• Spatial
• Spatially aligned projector/monitor
Bimber, O., & Raskar, R. (2005). Spatial augmented reality: merging real and virtual worlds. CRC press.
Display Taxonomy
Types of Head Mounted Displays
Occluded
See-thru
Multiplexed
Optical see-through Head-Mounted Display
(Diagram: virtual images from monitors are combined with the real-world view via optical combiners.)
Optical Design – Curved Mirror
▪ Reflect off free-space curved mirror
Video see-through HMD
(Diagram: video cameras capture the real world; the video is combined with graphics and shown on monitors.)
Example: Varjo XR-1
• Wide field of view
• 87 degrees
• High resolution
• 1920 x 1080 pixel/eye
• 1440 x 1600 pixel insert
• Low latency stereo cameras
• 2 x 12 megapixel
• < 20 ms delay
• Integrated Eye Tracking
Varjo XR-1 Image Quality
Multiplexed Display
Virtual Image ‘inset’ into Real World
Example: Google Glass
Spatial Augmented Reality
• Project onto irregular surfaces
• Geometric Registration
• Projector blending, High dynamic range
• Book: Bimber, Raskar “Spatial Augmented Reality”
Video Monitor AR
(Diagram: video cameras feed video and graphics into a combiner; the result is shown on a monitor and viewed with stereo glasses.)
Magic Mirror AR Experience
• See AR overlay of an image of yourself
AR Requires Tracking and Registration
• Registration
• Positioning virtual object wrt real world
• Fixing virtual object on real object when view is fixed
• Calibration
• Offline measurements
• Measure camera relative to head mounted display
• Tracking
• Continually locating the user’s viewpoint when the view is moving
• Position (x,y,z), Orientation (r,p,y)
Sources of Registration Errors
•Static errors
• Optical distortions (in HMD)
• Mechanical misalignments
• Tracker errors
• Incorrect viewing parameters
•Dynamic errors
• System delays (largest source of error)
• 1 ms delay = 1/3 mm registration error
Dynamic errors
• Total Delay = 50 + 2 + 33 + 17 = 102 ms
• 1 ms delay = 1/3 mm, so ~102 ms of delay ≈ 34 mm registration error
• Application Loop (position x,y,z and orientation r,p,y flow through each stage):
• Tracking / Calculate Viewpoint: 20 Hz = 50 ms
• Simulation: 500 Hz = 2 ms
• Render Scene: 30 Hz = 33 ms
• Draw to Display: 60 Hz = 17 ms
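The arithmetic is worth making concrete. A minimal sketch (plain Python, using the stage delays from this slide and the 1 ms ≈ 1/3 mm rule of thumb):

```python
# Stage delays of the application loop (ms), as in the slide above.
stage_delays_ms = {"tracking": 50, "simulation": 2, "render": 33, "display": 17}

total_ms = sum(stage_delays_ms.values())   # 50 + 2 + 33 + 17 = 102 ms
error_mm = total_ms / 3.0                  # rule of thumb: 1 ms ~ 1/3 mm error
print(f"Total delay: {total_ms} ms -> ~{error_mm:.0f} mm registration error")
```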
Reducing dynamic errors (1)
•Reduce system lag
•Faster components/system modules
•Reduce apparent lag
•Image deflection
•Image warping
Reducing dynamic errors (2)
• Match video + graphics input streams (video AR)
• Delay video of real world to match system lag
• User doesn’t notice
• Predictive Tracking
• Inertial sensors helpful
Azuma / Bishop 1994
Tracking Technologies
• Active
• Mechanical, Magnetic, Ultrasonic
• GPS, Wifi, cell location
• Passive
• Inertial sensors (compass, accelerometer, gyro)
• Computer Vision
• Marker based, Natural feature tracking
• Hybrid Tracking
• Combined sensors (e.g. Vision + Inertial)
Tracking Types
(Taxonomy: Mechanical, Magnetic, Inertial, Ultrasonic, and Optical trackers; Optical tracking divides into Marker-Based, Markerless, and Specialized Tracking; Markerless tracking covers Edge-Based, Template-Based, and Interest Point Tracking.)
OPTICAL TRACKING
https://www.youtube.com/watch?v=OtG-FNYhDv0
Why Optical Tracking for AR?
• Many AR devices have cameras
• Mobile phone/tablet, Video see-through display
• Provides precise alignment between video and AR overlay
• Using features in video to generate pixel perfect alignment
• The real world has many visual features that can be tracked
• Computer Vision is a well established discipline
• Over 40 years of research to draw on
• Older non-real-time algorithms can now run in real time on today’s devices
Common AR Optical Tracking Types
• Marker Tracking
• Tracking known artificial markers/images
• e.g. ARToolKit square markers
• Markerless Tracking
• Tracking from known features in real world
• e.g. Vuforia image tracking
• Unprepared Tracking
• Tracking in unknown environment
• e.g. SLAM tracking
Visual Tracking Approaches
• Marker based tracking with artificial features
• Make a model before tracking
• Model based tracking with natural features
• Acquire a model before tracking
• Simultaneous localization and mapping
• Build a model while tracking it
Marker Tracking
• Available for more than 20 years
• Several open-source solutions exist
• ARToolKit, ARTag, ATK+, etc
• Fairly simple to implement
• Standard computer vision methods
• A rectangle provides 4 corner points
• Enough for pose estimation!
Demo: ARToolKit
Key Problem: Finding Camera Position
• Need camera pose relative to marker to render AR graphics
(Figure: known image → image in camera view → AR content overlaid.)
Goal: Find Camera Pose
• Knowing:
• Position of key points in on-screen video image
• Camera properties (focal length, image distortion)
Coordinates for Marker Tracking
• Final goal: Marker → Camera
• Rotation & translation
• Step 1: Camera → Ideal Screen
• Perspective model
• Obtained from camera calibration
• Step 2: Ideal Screen → Observed Screen
• Nonlinear function (barrel shape)
• Obtained from camera calibration
• Step 3: Marker → Observed Screen
• Correspondence of 4 vertices
• Real-time image processing
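As a hedged illustration of these three transforms, the sketch below (Python with OpenCV; the intrinsics K, distortion coefficients, marker pose, and marker size are all made-up values) projects one marker corner all the way to observed-screen pixels. OpenCV’s projectPoints applies exactly this chain: marker → camera, perspective projection, then lens distortion.

```python
import numpy as np
import cv2

# Assumed (illustrative) calibration results: perspective model K and
# barrel-distortion coefficients, both obtained offline by camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

# Marker -> camera transform (the quantity tracking must estimate).
rvec = np.array([0.1, -0.2, 0.05])   # rotation as a Rodrigues vector
tvec = np.array([0.0, 0.0, 0.5])     # translation in metres

# One corner of an 8 cm marker, in marker coordinates.
corner = np.array([[[-0.04, -0.04, 0.0]]])

# Steps 1-3 composed: marker -> camera -> ideal screen -> observed screen.
observed, _ = cv2.projectPoints(corner, rvec, tvec, K, dist)
print(observed.ravel())   # pixel position in the observed image
```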
Marker Tracking – General Principle
1. Capture image with known camera
2. Search for quadrilaterals
3. Pose estimation from homography
4. Pose refinement (minimize nonlinear projection error)
5. Use final pose
Image: Daniel Wagner
Marker Based Tracking: ARToolKit
https://github.com/artoolkit
Marker Tracking – Fiducial Detection
• Threshold the whole image to black and white
• Search scanline by scanline for edges (white to black)
• Follow edge until either
• Back to starting pixel
• Image border
• Check for size
• Reject fiducials early that are too small (or too large)
Marker Tracking – Rectangle Fitting
• Start with an arbitrary point “x” on the contour
• The point with maximum distance must be a corner c0
• Create a diagonal through the center
• Find points c1 & c2 with maximum distance left and right of the diagonal
• New diagonal from c1 to c2
• Find point c3 right of diagonal with maximum distance
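ARToolKit’s own detector works scanline by scanline as described above; as a hedged stand-in, the same quadrilateral search can be sketched with a few standard OpenCV calls (threshold, contour tracing, polygon approximation; the threshold choice and minimum area are illustrative):

```python
import cv2

def find_marker_quads(gray, min_area=400):
    """Approximate fiducial detection: threshold the image, trace contours,
    and keep convex 4-corner shapes of plausible size."""
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        if cv2.contourArea(c) < min_area:        # reject too-small candidates early
            continue
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))   # the 4 corner points
    return quads
```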
Marker Tracking – Pattern checking
• Calculate homography using the 4 corner points
• “Direct Linear Transform” algorithm
• Maps normalized coordinates to marker coordinates
(simple perspective projection, no camera model)
• Extract pattern by sampling and check
• Id (implicit encoding)
• Template (normalized cross correlation)
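A hedged sketch of this step in OpenCV: getPerspectiveTransform computes the same 4-point homography a Direct Linear Transform would, and warpPerspective samples the marker interior so it can be checked (the 16×16 pattern size is illustrative):

```python
import numpy as np
import cv2

def extract_pattern(gray, corners, size=16):
    """Rectify the marker interior for an ID check or template match."""
    src = corners.astype(np.float32)                   # 4 detected corner points
    dst = np.float32([[0, 0], [size - 1, 0],
                      [size - 1, size - 1], [0, size - 1]])
    H = cv2.getPerspectiveTransform(src, dst)          # 4-point homography (DLT)
    return cv2.warpPerspective(gray, H, (size, size))  # sampled pattern

# Template check via normalized cross-correlation against a stored pattern:
# score = cv2.matchTemplate(pattern, template, cv2.TM_CCORR_NORMED)
```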
Marker tracking – Pose estimation
• Calculates marker pose relative to the camera
• Initial estimation directly from homography
• Very fast, but coarse with error
• Jitters a lot…
• Iterative Refinement using Gauss-Newton method
• 6 parameters (3 for position, 3 for rotation) to refine
• At each iteration we optimize on the error
• Iterate
Outcome: Camera Transform
• Transformation from Marker to Camera
• Rotation and Translation
TCM : 4x4 transformation matrix
from marker coord. to camera coord.
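A minimal sketch of recovering TCM with OpenCV, assuming a calibrated camera (K, dist) and a square marker of known side length; solvePnP’s planar-square solver stands in here for the homography-plus-refinement pipeline the slides describe:

```python
import numpy as np
import cv2

def marker_pose(corners_px, K, dist, side=0.08):
    """TCM: 4x4 transform from marker coordinates to camera coordinates,
    estimated from the 4 detected marker corners (pixel coordinates)."""
    s = side / 2.0
    # Square marker corners in marker coordinates (order matters for IPPE_SQUARE).
    obj = np.array([[-s,  s, 0], [ s,  s, 0],
                    [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    img = corners_px.reshape(4, 1, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation from the Rodrigues vector
    TCM = np.eye(4)
    TCM[:3, :3] = R                  # rotation part
    TCM[:3, 3] = tvec.ravel()        # translation part
    return TCM
```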
Tracking challenges in ARToolKit
• False positives and inter-marker confusion (image by M. Fiala)
• Image noise (e.g. poor lens, block coding/compression, neon tube)
• Unfocused camera, motion blur
• Dark/unevenly lit scene, vignetting
• Jittering (Photoshop illustration)
• Occlusion (image by M. Fiala)
Other Marker Tracking Libraries
But you can’t cover the world with ARToolKit markers!
Markerless Tracking
(Same taxonomy as before: Mechanical, Magnetic, Inertial, Ultrasonic, and Optical trackers; Optical tracking divides into Marker-Based, Markerless, and Specialized Tracking; Markerless tracking covers Edge-Based, Template-Based, and Interest Point Tracking.)
• No more markers! → Markerless Tracking
https://www.youtube.com/watch?v=ANEB-DhuTSA
Visual Tracking Approaches
• Marker based tracking with artificial features
• Make a model before tracking
• Model based tracking with natural features
• Acquire a model before tracking
• Simultaneous localization and mapping
• Build a model while tracking it
Natural Feature Tracking
• Use Natural Cues of Real Elements
• Edges
• Surface Texture
• Interest Points
• Model or Model-Free
• No visual pollution
(Figure: contours, feature points, surfaces.)
Natural Features
• Detect salient interest points in image
• Must be easily found
• Location in image should remain stable
when viewpoint changes
• Requires textured surfaces
• Alternative: can use edge features (less discriminative)
• Match interest points to tracking model database
• Database filled with results of 3D reconstruction
• Matching entire (sub-)images is too costly
• Typically interest points are compiled into “descriptors”
Images: Gerhard Reitmayr, Martin Hirzer
Texture Tracking
Demo: Vuforia Texture Tracking
https://www.youtube.com/watch?v=1Qf5Qew5zSU
Tracking by Keypoint Detection
• This is what most trackers do…
• Targets are detected every frame
• Popular because tracking and detection
are solved simultaneously
(Pipeline: Camera Image → Keypoint detection → Descriptor creation and matching → Outlier removal → Pose estimation and refinement → Pose, plus a recognition step to identify the target.)
Detection and Tracking
• Tracking and detection are complementary approaches
• Detection (at start, and whenever the target is lost or not detected):
• Recognize target type, detect target, initialize camera pose
• Incremental tracking (while tracking remains ok):
• Fast; robust to blur, lighting changes, and tilt
• After successful detection, the target is tracked incrementally; if the target is lost, detection is activated again
What is a Keypoint?
• It depends on the detector you use!
• For high performance use the FAST corner detector
• Apply FAST to all pixels of your image
• Obtain a set of keypoints for your image
• Describe the keypoints
Rosten, E., & Drummond, T. (2006, May). Machine learning for high-speed corner detection.
In European conference on computer vision (pp. 430-443). Springer Berlin Heidelberg.
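A small, hedged FAST example with OpenCV (the synthetic image and threshold are illustrative; a real tracker would run this on every camera frame):

```python
import numpy as np
import cv2

# Synthetic test image: a bright square on black gives strong corners.
img = np.zeros((240, 320), np.uint8)
cv2.rectangle(img, (100, 80), (220, 160), 255, -1)

# FAST examines a ring of 16 pixels around each candidate and fires when a
# long contiguous arc is much brighter or darker than the centre pixel.
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)
print(len(keypoints), "FAST keypoints")   # the square's corners
```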
FAST Corner Keypoint Detection
Example: FAST Corner Detection
https://www.youtube.com/watch?v=fevfxfHnpeY
Descriptors
• Describe the Keypoint features
• Can use SIFT
• Estimate the dominant keypoint
orientation using gradients
• Compensate for detected
orientation
• Describe each keypoint in terms of the gradients surrounding it
Wagner D., Reitmayr G., Mulloni A., Drummond T., Schmalstieg D.,
Real-Time Detection and Tracking for Augmented Reality on Mobile Phones.
IEEE Transactions on Visualization and Computer Graphics, May/June, 2010
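A hedged SIFT example with OpenCV (the smoothed-noise image is just an illustrative stand-in for a textured target):

```python
import numpy as np
import cv2

# Synthetic textured image: smoothed noise provides blob-like structure.
rng = np.random.default_rng(0)
img = cv2.GaussianBlur(rng.integers(0, 255, (240, 320)).astype(np.uint8),
                       (0, 0), 2.0)

# SIFT estimates a dominant orientation from local gradients, then encodes
# each keypoint as histograms of the surrounding gradients (128-D vector).
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), "keypoints,",
      None if descriptors is None else descriptors.shape)
```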
Database Creation
• Offline step – create database of known features
• Searching for corners in a static image
• For robustness look at corners on multiple scales
• Some corners are more descriptive at larger or smaller scales
• We don’t know how far users will be from our image
• Build a database file with all descriptors and their
position on the original image
Real-time Tracking
• Search for known keypoints in the video
• Create the descriptors
• Match the descriptors from the
live video against those in the database
• Brute force is not an option
• Need the speed-up of special data structures
(Pipeline: Camera Image → Keypoint detection → Descriptor creation and matching → Outlier removal → Pose estimation and refinement → Pose, plus a recognition step to identify the target.)
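One such speed-up is approximate nearest-neighbour search. A hedged sketch using OpenCV’s FLANN matcher (the random descriptors are illustrative stand-ins for SIFT vectors from the database and the live frame):

```python
import numpy as np
import cv2

# Illustrative float32 descriptors: a 'database' set and a 'frame' set that
# contains noisy copies of some database entries.
rng = np.random.default_rng(0)
des_db = rng.standard_normal((500, 128)).astype(np.float32)
des_frame = (des_db[:200]
             + 0.05 * rng.standard_normal((200, 128))).astype(np.float32)

# FLANN: approximate nearest-neighbour search over randomized KD-trees,
# the kind of special data structure that makes matching real-time.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # 1 = KDTREE index
                              dict(checks=50))             # speed/accuracy
matches = flann.knnMatch(des_frame, des_db, k=2)

# Lowe's ratio test: keep matches clearly better than the runner-up.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} of {len(matches)} matches kept")
```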
NFT – Outlier removal
• Removing outlier features
• Several removal techniques
• Simple geometric tests
• Is the keypoint rotation invariant?
• Do keypoints remain relative to each other?
• Homography-based tests
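A hedged sketch of the homography-based test with OpenCV: RANSAC fits a single planar mapping and flags the matches that disagree with it as outliers (the synthetic matches below are illustrative):

```python
import numpy as np
import cv2

# Synthetic matched points: a planar mapping plus a few corrupted matches.
rng = np.random.default_rng(0)
pts_db = rng.uniform(0, 640, (50, 1, 2)).astype(np.float32)
H_true = np.array([[1.0, 0.02, 5.0],
                   [-0.01, 1.0, -3.0],
                   [0.0, 0.0, 1.0]], dtype=np.float32)
pts_frame = cv2.perspectiveTransform(pts_db, H_true)
pts_frame[:5] += 40.0                 # corrupt a few matches (outliers)

# RANSAC fits a homography; matches that disagree are marked as outliers.
H, mask = cv2.findHomography(pts_db, pts_frame, cv2.RANSAC, 3.0)
print(int(mask.sum()), "of", len(mask), "matches kept as inliers")
```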
NFT – Pose refinement
• Pose from homography makes good
starting point
• Use Gauss-Newton iteration
• Try to minimize the re-projection error
of the keypoints
• Typically, 2–4 iterations are enough
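A hedged end-to-end sketch with OpenCV: synthesize observations from a known pose, perturb it (as if taken coarsely from the homography), then refine by minimizing the re-projection error. OpenCV’s Levenberg-Marquardt refiner is used here as a close, damped cousin of the Gauss-Newton iteration described above:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

# Planar model keypoints and a ground-truth pose to synthesize observations.
obj = np.array([[-0.1, -0.1, 0], [0.1, -0.1, 0],
                [0.1, 0.1, 0], [-0.1, 0.1, 0]], dtype=np.float32)
rvec_true = np.array([0.1, -0.2, 0.3])
tvec_true = np.array([0.02, -0.01, 0.6])
img_pts, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, dist)

# Coarse starting pose (as if taken directly from the homography)...
rvec0 = rvec_true + 0.05
tvec0 = tvec_true + 0.02
# ...refined by minimizing the re-projection error of the keypoints.
rvec, tvec = cv2.solvePnPRefineLM(obj, img_pts, K, dist, rvec0, tvec0)
print(np.round(rvec.ravel(), 3), np.round(tvec.ravel(), 3))  # ~ true pose
```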
NFT – Real-time tracking
• Search for keypoints in the video image
• Create the descriptors
• Match the descriptors from the
live video against those in the database
• Remove the keypoints that are outliers
• Use the remaining keypoints
to calculate the pose of the camera
(Pipeline: Camera Image → Keypoint detection → Descriptor creation and matching → Outlier removal → Pose estimation and refinement → Pose, plus a recognition step to identify the target.)
Example
(Figure: target image → feature detection → AR overlay.)
https://www.youtube.com/watch?v=O8XH6ORpBls
Edge Based Tracking
• Example: RAPiD [Drummond et al. 02]
• Initialization, Control Points, Pose Prediction (Global Method)
Demo: Edge Based Tracking
Line Based Tracking
• Visual Servoing [Comport et al. 2004]
3D Model Based Tracking
• Tracking from 3D object shape
• Align detected features to 3D object model
• Examples
• SnapChat Face tracking
• Mechanical part tracking
• Vehicle tracking
• Etc..
Typical Model Based Tracking Algorithm
Example: Vuforia Model Tracker
• Uses pre-captured 3D model for tracking
• On-screen guide to line up model
Model Tracking Demo
https://www.youtube.com/watch?v=6W7_ZssUTDQ
Taxonomy of Model Based Tracking
Lowney, M., & Raj, A. S. (2016). Model based tracking for augmented reality on mobile devices.
Marker vs. Natural Feature Tracking
• Marker tracking
• Usually requires no database to be stored
• Markers can be an eye-catcher
• Tracking is less demanding
• The environment must be instrumented
• Markers usually work only when fully in view
• Natural feature tracking
• A database of keypoints must be stored/downloaded
• Natural feature targets might catch the attention less
• Natural feature targets are potentially everywhere
• Natural feature targets work also if partially in view
Visual Tracking Approaches
• Marker based tracking with artificial features
• Make a model before tracking
• Model based tracking with natural features
• Acquire a model before tracking
• Simultaneous localization and mapping
• Build a model while tracking it
https://www.youtube.com/watch?v=uQeOYi3Be5Y
Tracking from an Unknown Environment
• What to do when you don’t know any features?
• Very important problem in mobile robotics - Where am I?
• SLAM
• Simultaneously Localize And Map the environment
• Goal: to recover both camera pose and map structure
while initially knowing neither.
• Mapping:
• Building a map of the environment which the robot is in
• Localisation:
• Navigating this environment using the map while keeping
track of the robot’s relative position and orientation
Parallel Tracking and Mapping
• Parallel tracking and mapping uses two concurrent threads, one for tracking and one for mapping, which run at different speeds
• Tracking thread (fast): estimates the camera pose, for every frame
• Mapping thread (slow update rate): extends and improves the map
• New keyframes flow from tracking to mapping; map updates flow back
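The two-thread structure is easy to see in a toy sketch (plain Python; the pose estimation and map extension are stubbed out, and the rates are illustrative):

```python
import queue
import threading
import time

keyframes = queue.Queue()     # tracking -> mapping: new keyframes
world_map = {"points": 0}     # mapping -> tracking: shared map state

def mapping():
    """SLOW thread: extend and improve the map from incoming keyframes."""
    while True:
        kf = keyframes.get()
        time.sleep(0.3)              # stand-in for triangulation / refinement
        world_map["points"] += 50    # the map grows as keyframes arrive

threading.Thread(target=mapping, daemon=True).start()

# FAST loop: estimate the camera pose against the map, for every frame.
for frame in range(90):
    if frame % 30 == 0:
        print(f"frame {frame}: tracking against {world_map['points']} map points")
    if frame % 15 == 0:
        keyframes.put(frame)         # occasionally promote a frame to keyframe
    time.sleep(1 / 30)               # ~30 Hz tracking rate
```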
Parallel Tracking and Mapping
(Video: “Simultaneous localization and mapping in small workspaces”, Klein/Drummond, Univ. Cambridge. The FAST tracking thread consumes new frames from the video stream and outputs the tracked local pose; the SLOW mapping thread returns map updates.)
Visual SLAM
• Early SLAM systems (1986 – )
• Computer vision and sensors (e.g. IMU, laser, etc.)
• One of the most important algorithms in Robotics
• Visual SLAM
• Using cameras only, such as stereo view
• MonoSLAM (single camera) developed in 2007 (Davison)
Example: Kudan MonoSLAM
How SLAM Works
• Three main steps
1. Tracking a set of points through successive camera frames
2. Using these tracks to triangulate their 3D position
3. Simultaneously use the estimated point locations to calculate
the camera pose which could have observed them
• By observing a sufficient number of points, the system can solve for both structure and motion (camera path and scene structure)
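Step 2 can be made concrete with OpenCV’s triangulation routine. A hedged sketch (the intrinsics, baseline, and pixel measurements are synthetic, chosen so the answer is known):

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Two camera poses along the track (world -> camera), 10 cm apart.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# The same scene point tracked in both frames (pixel coordinates, 2xN).
x1 = np.array([[320.0], [240.0]])
x2 = np.array([[186.7], [240.0]])

# Triangulate the 3D position from the two tracked observations.
X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print(X)   # ~[0, 0, 0.6]: the point sits 0.6 m in front of the first camera
```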
Evolution of SLAM Systems
• MonoSLAM (Davison, 2007)
• Real time SLAM from single camera
• PTAM (Klein, 2009)
• First SLAM implementation on mobile phone
• FAB-MAP (Cummins, 2008)
• Probabilistic Localization and Mapping
• DTAM (Newcombe, 2011)
• 3D surface reconstruction from every pixel in image
• KinectFusion (Izadi, 2011)
• Realtime dense surface mapping and tracking using RGB-D
Demo: MonoSLAM
LSD-SLAM (Engel 2014)
• A novel, direct monocular SLAM technique
• Uses image intensities both for tracking and mapping.
• The camera is tracked using direct image alignment, while geometry is estimated as semi-dense depth maps
• Supports very large-scale tracking
• Runs in real time on CPU and smartphone
Demo: LSD-SLAM
Direct Method vs. Feature Based
• Direct methods use all the information in the image, unlike feature-based approaches that use only small patches around corners and edges
Applications of SLAM Systems
• Many possible applications
• Augmented Reality camera tracking
• Mobile robot localisation
• Real world navigation aid
• 3D scene reconstruction
• 3D Object reconstruction
• Etc..
• Assumptions
• Camera moves through an unchanging scene
• So not suitable for person tracking, gesture recognition
• Both involve non-rigidly deforming objects and a non-static map
Hybrid Tracking Interfaces
• Combine multiple tracking technologies together
• Active-Passive: Magnetic, Vision
• Active-Inertial: Vision, inertial
• Passive-Inertial: Compass, inertial
Combining Sensors and Vision
• Sensors
• Produce noisy output (= jittering augmentations)
• Are not sufficiently accurate (= wrongly placed augmentations)
• Give us first information on where we are in the world, and what we are looking at
• Vision
• Is more accurate (= stable and correct augmentations)
• Requires choosing the correct keypoint database to track from
• Requires registering our local coordinate frame (online-
generated model) to the global one (world)
Outdoor AR Tracking System
You, Neumann, Azuma outdoor AR system (1999)
Types of Sensor Fusion
• Complementary
• Combining sensors with different degrees of freedom
• Sensors must be synchronized (or requires inter-/extrapolation)
• E.g., combine position-only and orientation-only sensor
• E.g., orthogonal 1D sensors in gyro or magnetometer are complementary
• Competitive
• Different sensor types measure the same degree of freedom
• Redundant sensor fusion
• Use worse sensor only if better sensor is unavailable
• E.g., GPS + pedometer
• Statistical sensor fusion
Example: Outdoor Hybrid Tracking
• Combines
• computer vision
• inertial gyroscope sensors
• Both correct for each other
• Inertial gyro
• provides frame to frame prediction of camera
orientation, fast sensing
• drifts over time
• Computer vision
• Natural feature tracking, corrects for gyro drift
• Slower, less accurate
Robust Outdoor Tracking
• Hybrid Tracking
• Computer Vision, GPS, inertial
• Going Out
• Reitmayr & Drummond (Univ. Cambridge)
Reitmayr, G., & Drummond, T. W. (2006). Going out: robust model-based tracking for outdoor augmented reality. In Mixed and Augmented Reality, 2006. ISMAR 2006. IEEE/ACM International Symposium on (pp. 109-118). IEEE.
Handheld Display
Demo: Going Out Hybrid Tracking
ARKit – Visual Inertial Odometry
• Uses both computer vision + inertial sensing
• Tracking position twice
• Computer Vision – feature tracking, 2D plane tracking
• Inertial sensing – using the phone IMU
• Output combined via Kalman filter
• Determine which output is most accurate
• Pass pose to ARKit SDK
• Each system complements the other
• Computer vision – needs visual features
• IMU - drifts over time, doesn’t need features
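ARKit’s actual fusion is a Kalman filter, as noted above; as a hedged stand-in for the idea, here is a minimal complementary-style blend in plain Python (all numbers illustrative): the fast-but-drifting IMU estimate carries the pose between vision fixes, and the slower drift-free vision estimate corrects it when available.

```python
import numpy as np

def fuse(vision_pos, imu_pos, vision_ok, w_vision=0.98):
    """Blend the two position estimates; fall back to IMU dead-reckoning
    whenever the camera has no visual features to track."""
    if not vision_ok:
        return imu_pos                      # IMU takes over
    return w_vision * vision_pos + (1 - w_vision) * imu_pos

pos = np.zeros(3)
for step in range(6):
    imu_estimate = pos + np.array([0.01, 0, 0]) + 0.002 * step   # drifting
    vision_estimate = np.array([0.01 * (step + 1), 0.0, 0.0])    # drift-free
    pos = fuse(vision_estimate, imu_estimate, vision_ok=(step % 2 == 0))
    print(step, np.round(pos, 4))
```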
ARKit – Visual Inertial Odometry
• Slow camera
• Fast IMU
• If camera drops out IMU takes over
• Camera corrects IMU errors
ARKit Demo
• https://www.youtube.com/watch?v=dMEWp45WAUg
Conclusions
• Tracking and Registration are key problems
• Registration error
• Measures against static error
• Measures against dynamic error
• AR typically requires multiple tracking technologies
• Computer vision most popular
• Research Areas:
• SLAM systems, Deformable models, Mobile outdoor tracking
More Information
Fua, P., & Lepetit, V. (2007). Vision based 3D tracking
and pose estimation for mixed reality. In Emerging
technologies of augmented reality: Interfaces and
design (pp. 1-22). IGI Global.
3: AR INTERACTION
Augmented Reality Technology
• Combines Real and Virtual Images
• Needs: Display technology
• Interactive in real-time
• Needs: Input and interaction technology
• Registered in 3D
• Needs: Viewpoint tracking technology
How Do You Design an Interface for This?
AR Interaction
• Designing AR Systems = Interface Design
• Using different input and output technologies
• Objective is a high quality of user experience
• Ease of use and learning
• Performance and satisfaction
Typical Interface Design Path
1. Prototype Demonstration
2. Adoption of Interaction Techniques from other interface metaphors
3. Development of new interface metaphors appropriate to the medium
4. Development of formal theoretical models for predicting and modeling user actions
(Examples along this path: Desktop WIMP, Virtual Reality, Augmented Reality.)
Interacting with AR Content
• You can see spatially registered AR content... but how can you interact with it?
Different Types of AR Interaction
• Browsing Interfaces
• simple (conceptually!), unobtrusive
• 3D AR Interfaces
• expressive, creative, require attention
• Tangible Interfaces
• Embedded into conventional environments
• Tangible AR
• Combines TUI input + AR display
AR Interfaces as Data Browsers
• 2D/3D virtual objects are
registered in 3D
• “VR in Real World”
• Interaction
• 2D/3D virtual viewpoint control
• Applications
• Visualization, training
AR Information Browsers
• Information is registered to real-world context
• Hand held AR displays
• Interaction
• Manipulation of a window
into information space
• Applications
• Context-aware information
displays
Rekimoto, et al. 1997
NaviCam Demo (1997)
NaviCam Architecture
Current AR Information Browsers
• Mobile AR
• GPS + compass
• Many Applications
• Wikitude
• Yelp
• Google maps
• …
Example: Google Maps AR Mode
• AR Navigation Aid
• GPS + compass, 2D/3D object placement
Advantages and Disadvantages
• Important class of AR interfaces
• Wearable computers
• AR simulation, training
• Limited interactivity
• Modification of virtual
content is difficult
Rekimoto, et al. 1997
3D AR Interfaces
• Virtual objects displayed in 3D
physical space and manipulated
• HMDs and 6DOF head-tracking
• 6DOF hand trackers for input
• Interaction
• Viewpoint control
• Traditional 3D user interface
interaction: manipulation, selection,
etc.
Kiyokawa, et al. 2000
AR 3D Interaction (2000)
Example: AR Graffiti
www.nextwall.net
Advantages and Disadvantages
• Important class of AR interfaces
• Entertainment, design, training
• Advantages
• User can interact with 3D virtual
object everywhere in space
• Natural, familiar interaction
• Disadvantages
• Usually no tactile feedback
• User has to use different devices for
virtual and physical objects
Oshima, et al. 2000
www.empathiccomputing.org
@marknb00
mark.billinghurst@unisa.edu.au
More Related Content

What's hot

2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology
Mark Billinghurst
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception
Mark Billinghurst
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR Applications
Mark Billinghurst
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and Systems
Mark Billinghurst
 
Advanced Methods for User Evaluation in AR/VR Studies
Advanced Methods for User Evaluation in AR/VR StudiesAdvanced Methods for User Evaluation in AR/VR Studies
Advanced Methods for User Evaluation in AR/VR Studies
Mark Billinghurst
 
Natural Interfaces for Augmented Reality
Natural Interfaces for Augmented RealityNatural Interfaces for Augmented Reality
Natural Interfaces for Augmented Reality
Mark Billinghurst
 
2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems
Mark Billinghurst
 
COMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
COMP 4010 - Lecture4 VR Technology - Visual and Haptic DisplaysCOMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
COMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
Mark Billinghurst
 
2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR
Mark Billinghurst
 
Mixed Reality in the Workspace
Mixed Reality in the WorkspaceMixed Reality in the Workspace
Mixed Reality in the Workspace
Mark Billinghurst
 
Comp 4010 2021 Snap Tutorial 2
Comp 4010 2021 Snap Tutorial 2Comp 4010 2021 Snap Tutorial 2
Comp 4010 2021 Snap Tutorial 2
Mark Billinghurst
 
Application in Augmented and Virtual Reality
Application in Augmented and Virtual RealityApplication in Augmented and Virtual Reality
Application in Augmented and Virtual Reality
Mark Billinghurst
 
Comp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VRComp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VR
Mark Billinghurst
 
COMP 4010 - Lecture 3 VR Systems
COMP 4010 - Lecture 3 VR SystemsCOMP 4010 - Lecture 3 VR Systems
COMP 4010 - Lecture 3 VR Systems
Mark Billinghurst
 
2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping
Mark Billinghurst
 
Research Directions in Transitional Interfaces
Research Directions in Transitional InterfacesResearch Directions in Transitional Interfaces
Research Directions in Transitional Interfaces
Mark Billinghurst
 
Comp4010 Lecture7 Designing AR Systems
Comp4010 Lecture7 Designing AR SystemsComp4010 Lecture7 Designing AR Systems
Comp4010 Lecture7 Designing AR Systems
Mark Billinghurst
 
Novel Interfaces for AR Systems
Novel Interfaces for AR SystemsNovel Interfaces for AR Systems
Novel Interfaces for AR Systems
Mark Billinghurst
 
2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking 2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking
Mark Billinghurst
 
Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality
Mark Billinghurst
 

What's hot (20)

2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR Applications
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and Systems
 
Advanced Methods for User Evaluation in AR/VR Studies
Advanced Methods for User Evaluation in AR/VR StudiesAdvanced Methods for User Evaluation in AR/VR Studies
Advanced Methods for User Evaluation in AR/VR Studies
 
Natural Interfaces for Augmented Reality
Natural Interfaces for Augmented RealityNatural Interfaces for Augmented Reality
Natural Interfaces for Augmented Reality
 
2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems
 
COMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
COMP 4010 - Lecture4 VR Technology - Visual and Haptic DisplaysCOMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
COMP 4010 - Lecture4 VR Technology - Visual and Haptic Displays
 
2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR
 
Mixed Reality in the Workspace
Mixed Reality in the WorkspaceMixed Reality in the Workspace
Mixed Reality in the Workspace
 
Comp 4010 2021 Snap Tutorial 2
Comp 4010 2021 Snap Tutorial 2Comp 4010 2021 Snap Tutorial 2
Comp 4010 2021 Snap Tutorial 2
 
Application in Augmented and Virtual Reality
Application in Augmented and Virtual RealityApplication in Augmented and Virtual Reality
Application in Augmented and Virtual Reality
 
Comp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VRComp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VR
 
COMP 4010 - Lecture 3 VR Systems
COMP 4010 - Lecture 3 VR SystemsCOMP 4010 - Lecture 3 VR Systems
COMP 4010 - Lecture 3 VR Systems
 
2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping
 
Research Directions in Transitional Interfaces
Research Directions in Transitional InterfacesResearch Directions in Transitional Interfaces
Research Directions in Transitional Interfaces
 
Comp4010 Lecture7 Designing AR Systems
Comp4010 Lecture7 Designing AR SystemsComp4010 Lecture7 Designing AR Systems
Comp4010 Lecture7 Designing AR Systems
 
Novel Interfaces for AR Systems
Novel Interfaces for AR SystemsNovel Interfaces for AR Systems
Novel Interfaces for AR Systems
 
2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking 2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking
 
Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality
 

Similar to Comp4010 Lecture4 AR Tracking and Interaction

Lecture 4: VR Systems
Lecture 4: VR SystemsLecture 4: VR Systems
Lecture 4: VR Systems
Mark Billinghurst
 
Mobile Augmented Reality
Mobile Augmented RealityMobile Augmented Reality
Mobile Augmented Reality
Marios Bikos
 
Mobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research DirectionsMobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research Directions
Mark Billinghurst
 
Overview of Computer Vision For Footwear Industry
Overview of Computer Vision For Footwear IndustryOverview of Computer Vision For Footwear Industry
Overview of Computer Vision For Footwear Industry
Tanvir Moin
 
2016 AR Summer School Lecture2
2016 AR Summer School Lecture22016 AR Summer School Lecture2
2016 AR Summer School Lecture2
Mark Billinghurst
 
Mobile AR Lecture 2 - Technology
Mobile AR Lecture 2 - TechnologyMobile AR Lecture 2 - Technology
Mobile AR Lecture 2 - Technology
Mark Billinghurst
 
pick and place robotic arm
pick and place robotic armpick and place robotic arm
pick and place robotic arm
ANJANA ANILKUMAR
 
Mainprojpresentation 150617092611-lva1-app6892
Mainprojpresentation 150617092611-lva1-app6892Mainprojpresentation 150617092611-lva1-app6892
Mainprojpresentation 150617092611-lva1-app6892
ANJANA ANILKUMAR
 
ICS1020CV_2022.pdf
ICS1020CV_2022.pdfICS1020CV_2022.pdf
ICS1020CV_2022.pdf
Vanessa Camilleri
 
Europa Presentation 2011
Europa Presentation 2011Europa Presentation 2011
Europa Presentation 2011Chris Churchill
 
COMP 4010: Lecture8 - AR Technology
COMP 4010: Lecture8 - AR TechnologyCOMP 4010: Lecture8 - AR Technology
COMP 4010: Lecture8 - AR Technology
Mark Billinghurst
 
PPT s01-machine vision-s2
PPT s01-machine vision-s2PPT s01-machine vision-s2
PPT s01-machine vision-s2
Binus Online Learning
 
Intelligente visie maakt drones autonoom
Intelligente visie maakt drones autonoomIntelligente visie maakt drones autonoom
Intelligente visie maakt drones autonoom
EUKA
 
What is computer vision?
What is computer vision?What is computer vision?
What is computer vision?
Qentinel
 
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Aritra Sarkar
 
How to easily improve quality using automated visual inspection
How to easily improve quality using automated visual inspectionHow to easily improve quality using automated visual inspection
How to easily improve quality using automated visual inspection
Design World
 
Comp4010 Lecture10 VR Interface Design
Comp4010 Lecture10 VR Interface DesignComp4010 Lecture10 VR Interface Design
Comp4010 Lecture10 VR Interface Design
Mark Billinghurst
 
PPT s06-machine vision-s2
PPT s06-machine vision-s2PPT s06-machine vision-s2
PPT s06-machine vision-s2
Binus Online Learning
 

Similar to Comp4010 Lecture4 AR Tracking and Interaction (20)

Lecture 4: VR Systems
Lecture 4: VR SystemsLecture 4: VR Systems
Lecture 4: VR Systems
 
Mobile Augmented Reality
Mobile Augmented RealityMobile Augmented Reality
Mobile Augmented Reality
 
Mobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research DirectionsMobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research Directions
 
Overview of Computer Vision For Footwear Industry
Overview of Computer Vision For Footwear IndustryOverview of Computer Vision For Footwear Industry
Overview of Computer Vision For Footwear Industry
 
2016 AR Summer School Lecture2
2016 AR Summer School Lecture22016 AR Summer School Lecture2
2016 AR Summer School Lecture2
 
Mobile AR Lecture 2 - Technology
Mobile AR Lecture 2 - TechnologyMobile AR Lecture 2 - Technology
Mobile AR Lecture 2 - Technology
 
pick and place robotic arm
pick and place robotic armpick and place robotic arm
pick and place robotic arm
 
Mainprojpresentation 150617092611-lva1-app6892
Mainprojpresentation 150617092611-lva1-app6892Mainprojpresentation 150617092611-lva1-app6892
Mainprojpresentation 150617092611-lva1-app6892
 
ICS1020CV_2022.pdf
ICS1020CV_2022.pdfICS1020CV_2022.pdf
ICS1020CV_2022.pdf
 
Europa Presentation 2011
Europa Presentation 2011Europa Presentation 2011
Europa Presentation 2011
 
COMP 4010: Lecture8 - AR Technology
COMP 4010: Lecture8 - AR TechnologyCOMP 4010: Lecture8 - AR Technology
COMP 4010: Lecture8 - AR Technology
 
PPT s01-machine vision-s2
PPT s01-machine vision-s2PPT s01-machine vision-s2
PPT s01-machine vision-s2
 
Intelligente visie maakt drones autonoom
Intelligente visie maakt drones autonoomIntelligente visie maakt drones autonoom
Intelligente visie maakt drones autonoom
 
FinalPoster
FinalPosterFinalPoster
FinalPoster
 
What is computer vision?
What is computer vision?What is computer vision?
What is computer vision?
 
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
Computer-Vision based Centralized Multi-agent System on Matlab and Arduino Du...
 
Seminar_1118.pptx
Seminar_1118.pptxSeminar_1118.pptx
Seminar_1118.pptx
 
How to easily improve quality using automated visual inspection
How to easily improve quality using automated visual inspectionHow to easily improve quality using automated visual inspection
How to easily improve quality using automated visual inspection
 
Comp4010 Lecture10 VR Interface Design
Comp4010 Lecture10 VR Interface DesignComp4010 Lecture10 VR Interface Design
Comp4010 Lecture10 VR Interface Design
 
PPT s06-machine vision-s2
PPT s06-machine vision-s2PPT s06-machine vision-s2
PPT s06-machine vision-s2
 

More from Mark Billinghurst

The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?
Mark Billinghurst
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
Mark Billinghurst
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Mark Billinghurst
 
Future Research Directions for Augmented Reality
Future Research Directions for Augmented RealityFuture Research Directions for Augmented Reality
Future Research Directions for Augmented Reality
Mark Billinghurst
 
Evaluation Methods for Social XR Experiences
Evaluation Methods for Social XR ExperiencesEvaluation Methods for Social XR Experiences
Evaluation Methods for Social XR Experiences
Mark Billinghurst
 
Empathic Computing: Delivering the Potential of the Metaverse
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
Mark Billinghurst
 
Empathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the MetaverseEmpathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the Metaverse
Mark Billinghurst
 
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote CollaborationTalk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Mark Billinghurst
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader Metaverse
Mark Billinghurst
 
2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR
Mark Billinghurst
 
ISS2022 Keynote
ISS2022 KeynoteISS2022 Keynote
ISS2022 Keynote
Mark Billinghurst
 
Empathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive AnalyticsEmpathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive Analytics
Mark Billinghurst
 
Metaverse Learning
Metaverse LearningMetaverse Learning
Metaverse Learning
Mark Billinghurst
 
Empathic Computing: Developing for the Whole Metaverse
Empathic Computing: Developing for the Whole MetaverseEmpathic Computing: Developing for the Whole Metaverse
Empathic Computing: Developing for the Whole Metaverse
Mark Billinghurst
 
Comp4010 Lecture13 More Research Directions
Comp4010 Lecture13 More Research DirectionsComp4010 Lecture13 More Research Directions
Comp4010 Lecture13 More Research Directions
Mark Billinghurst
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR Applications
Mark Billinghurst
 
Advanced Methods for User Evaluation in Enterprise AR
Advanced Methods for User Evaluation in Enterprise ARAdvanced Methods for User Evaluation in Enterprise AR
Advanced Methods for User Evaluation in Enterprise AR
Mark Billinghurst
 

More from Mark Billinghurst (17)

The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Future Research Directions for Augmented Reality
Future Research Directions for Augmented RealityFuture Research Directions for Augmented Reality
Future Research Directions for Augmented Reality
 
Evaluation Methods for Social XR Experiences
Evaluation Methods for Social XR ExperiencesEvaluation Methods for Social XR Experiences
Evaluation Methods for Social XR Experiences
 
Empathic Computing: Delivering the Potential of the Metaverse
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
 
Empathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the MetaverseEmpathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the Metaverse
 
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote CollaborationTalk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader Metaverse
 
2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR
 
ISS2022 Keynote
ISS2022 KeynoteISS2022 Keynote
ISS2022 Keynote
 
Empathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive AnalyticsEmpathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive Analytics
 
Metaverse Learning
Metaverse LearningMetaverse Learning
Metaverse Learning
 
Empathic Computing: Developing for the Whole Metaverse
Empathic Computing: Developing for the Whole MetaverseEmpathic Computing: Developing for the Whole Metaverse
Empathic Computing: Developing for the Whole Metaverse
 
Comp4010 Lecture13 More Research Directions
Comp4010 Lecture13 More Research DirectionsComp4010 Lecture13 More Research Directions
Comp4010 Lecture13 More Research Directions
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR Applications
 
Advanced Methods for User Evaluation in Enterprise AR
Advanced Methods for User Evaluation in Enterprise ARAdvanced Methods for User Evaluation in Enterprise AR
Advanced Methods for User Evaluation in Enterprise AR
 

Recently uploaded

FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfSAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
Peter Spielvogel
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Free Complete Python - A step towards Data Science
Free Complete Python - A step towards Data ScienceFree Complete Python - A step towards Data Science
Free Complete Python - A step towards Data Science
RinaMondal9
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
Pierluigi Pugliese
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Nexer Digital
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfSAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Free Complete Python - A step towards Data Science
Free Complete Python - A step towards Data ScienceFree Complete Python - A step towards Data Science
Free Complete Python - A step towards Data Science
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 

Comp4010 Lecture4 AR Tracking and Interaction

  • 1. AR TRACKING AND INTERACTION COMP 4010 Lecture Four Mark Billinghurst August 17th 2021 mark.billinghurst@unisa.edu.au
  • 3. Augmented Reality Definition • Combines Real and Virtual Images • Both can be seen at the same time • Interactive in real-time • The virtual content can be interacted with • Registered in 3D • Virtual objects appear fixed in space
  • 4. Augmented RealityTechnology • Combines Real and Virtual Images • Needs: Display technology • Interactive in real-time • Needs: Input and interaction technology • Registered in 3D • Needs: Viewpoint tracking technology
  • 5. Example: MagicLeap ML-1 AR Display •Display • Multi-layered Waveguide display •Tracking • Inside out SLAM tracking •Input • 6DOF wand, gesture input
  • 6. AR Display Technologies • Classification (Bimber/Raskar 2005) • Head attached • Head mounted display/projector • Body attached • Handheld display/projector • Spatial • Spatially aligned projector/monitor
  • 7. Bimber, O., & Raskar, R. (2005). Spatial augmented reality: merging real and virtual worlds. CRC press. DisplayTaxonomy
  • 8. Types of Head Mounted Displays Occluded See-thru Multiplexed
  • 9. Optical see-through Head-Mounted Display Virtual images from monitors Real World Optical Combiners
  • 10. Optical Design – Curved Mirror ▪ Reflect off free-space curved mirror
  • 12. Example: Varjo XR-1 • Wide field of view • 87 degrees • High resolution • 1920 x 1080 pixel/eye • 1440 x 1600 pixel insert • Low latency stereo cameras • 2 x 12 megapixel • < 20 ms delay • Integrated Eye Tracking
  • 13. Varjo XR-1 Image Quality
  • 14. Multiplexed Display Virtual Image ‘inset’ into Real World
  • 16. SpatialAugmented Reality • Project onto irregular surfaces • Geometric Registration • Projector blending, High dynamic range • Book: Bimber, Rasker “Spatial Augmented Reality”
  • 17. Video MonitorAR Video cameras Monitor Graphics Combiner Video Stereo glasses
  • 18. Magic Mirror AR Experience • See AR overlay of an image of yourself
  • 19. AR RequiresTracking and Registration • Registration • Positioning virtual object wrt real world • Fixing virtual object on real object when view is fixed • Calibration • Offline measurements • Measure camera relative to head mounted display • Tracking • Continually locating the user’s viewpoint when view moving • Position (x,y,z), Orientation (r,p,y)
  • 20. Sources of Registration Errors •Static errors • Optical distortions (in HMD) • Mechanical misalignments • Tracker errors • Incorrect viewing parameters •Dynamic errors • System delays (largest source of error) • 1 ms delay = 1/3 mm registration error
  • 21. Dynamic errors • Total Delay = 50 + 2 + 33 + 17 = 102 ms • 1 ms delay = 1/3 mm = 33mm error Tracking Calculate Viewpoint Simulation Render Scene Draw to Display x,y,z r,p,y Application Loop 20 Hz = 50ms 500 Hz = 2ms 30 Hz = 33ms 60 Hz = 17ms
  • 22. Reducing dynamic errors (1) •Reduce system lag •Faster components/system modules •Reduce apparent lag •Image deflection •Image warping
  • 23. Reducing dynamic errors (2) • Match video + graphics input streams (video AR) • Delay video of real world to match system lag • User doesn’t notice • Predictive Tracking • Inertial sensors helpful Azuma / Bishop 1994
  • 24. Tracking Technologies § Active • Mechanical, Magnetic, Ultrasonic • GPS, Wifi, cell location § Passive • Inertial sensors (compass, accelerometer, gyro) • Computer Vision • Marker based, Natural feature tracking § Hybrid Tracking • Combined sensors (eg Vision + Inertial)
  • 28. Why Optical Tracking for AR? • Many AR devices have cameras • Mobile phone/tablet, Video see-through display • Provides precise alignment between video and AR overlay • Using features in video to generate pixel perfect alignment • Real world has many visual features that can be tracked from • Computer Vision is a well established discipline • Over 40 years of research to draw on • Old non real time algorithms can be run in real time on todays devices
  • 29. Common AR Optical Tracking Types • Marker Tracking • Tracking known artificial markers/images • e.g. ARToolKit square markers • Markerless Tracking • Tracking from known features in real world • e.g. Vuforia image tracking • Unprepared Tracking • Tracking in unknown environment • e.g. SLAM tracking
  • 30. Visual Tracking Approaches • Marker based tracking with artificial features • Make a model before tracking • Model based tracking with natural features • Acquire a model before tracking • Simultaneous localization and mapping • Build a model while tracking it
  • 31. Marker Tracking • Available for more than 20 years • Several open-source solutions exist • ARToolKit, ARTag, ATK+, etc. • Fairly simple to implement • Standard computer vision methods • A rectangle provides 4 corner points • Enough for pose estimation!
  • 33. Key Problem: Finding Camera Position • Need the camera pose relative to the marker to render AR graphics • (Figure: known image → image in camera view → overlay AR content)
  • 34. Goal: Find Camera Pose • Knowing: • Position of key points in on-screen video image • Camera properties (focal length, image distortion)
  • 36. Coordinates for Marker Tracking • Final goal: Marker → Camera (rotation & translation) • Step 1: Camera → Ideal Screen (perspective model, obtained from camera calibration) • Step 2: Ideal Screen → Observed Screen (nonlinear barrel-shaped distortion, obtained from camera calibration) • Step 3: Marker → Observed Screen (correspondence of 4 vertices, real-time image processing)
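Written compactly in standard pinhole-camera notation (textbook symbols, not names from the slide: K for the intrinsics, [R | t] for the marker-to-camera pose, D for the lens distortion), the chain of transforms is:

    \tilde{x}_{\mathrm{ideal}} \sim K \, [R \mid t] \, X_{\mathrm{marker}}, \qquad x_{\mathrm{observed}} = D(\tilde{x}_{\mathrm{ideal}})

Calibration supplies K and D offline; tracking then solves for R and t in real time from the four observed vertex correspondences.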
  • 37. Marker Tracking – General Principle 1. Capture image with a known camera 2. Search for quadrilaterals 3. Pose estimation from homography 4. Pose refinement: minimize nonlinear reprojection error 5. Use final pose (Image: Daniel Wagner)
  • 38. Marker Based Tracking: ARToolKit https://github.com/artoolkit
  • 39. Marker Tracking – Fiducial Detection • Threshold the whole image to black and white • Search scanline by scanline for edges (white to black) • Follow the edge until either • Back at the starting pixel • At the image border • Check for size • Reject fiducials early that are too small (or too large)
  • 40. Marker Tracking – Rectangle Fitting • Start with an arbitrary point “x” on the contour • The point with maximum distance must be a corner c0 • Create a diagonal through the center • Find points c1 & c2 with maximum distance left and right of the diagonal • New diagonal from c1 to c2 • Find point c3 right of the diagonal with maximum distance • (A code sketch of these two steps follows)
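A minimal OpenCV sketch of the detection and rectangle-fitting steps on the last two slides. The slides describe scanline edge-following and a diagonal-based corner search; here cv2.findContours and cv2.approxPolyDP stand in for those steps, and the file name and thresholds are made up for illustration:

    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical camera frame
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)  # black/white image
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    quads = []
    for c in contours:
        if cv2.contourArea(c) < 1000:                       # reject too-small fiducials early
            continue
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2).astype("float32"))  # 4 corners per candidate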
  • 41. Marker Tracking – Pattern Checking • Calculate the homography using the 4 corner points • “Direct Linear Transform” algorithm • Maps normalized coordinates to marker coordinates (simple perspective projection, no camera model) • Extract the pattern by sampling and check • ID (implicit encoding) • Template (normalized cross-correlation)
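Continuing the sketch above (it reuses gray and quads), a homography maps the detected quad to a canonical square so the interior pattern can be sampled and template-checked. The 64-pixel canonical size and the stored reference image are assumptions, and the corner ordering is assumed consistent:

    import cv2
    import numpy as np

    size = 64
    canonical = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    H = cv2.getPerspectiveTransform(quads[0], canonical)  # DLT for exactly 4 points
    pattern = cv2.warpPerspective(gray, H, (size, size))  # sample the marker interior

    stored = cv2.imread("marker_pattern.png", cv2.IMREAD_GRAYSCALE)  # hypothetical 64x64 template
    score = cv2.matchTemplate(pattern, stored, cv2.TM_CCORR_NORMED)[0, 0]  # normalized cross-correlation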
  • 42. Marker Tracking – Pose Estimation • Calculates the marker pose relative to the camera • Initial estimate comes directly from the homography • Very fast, but coarse, with error • Jitters a lot… • Iterative refinement using the Gauss-Newton method • 6 parameters (3 for position, 3 for rotation) to refine • Each iteration reduces the reprojection error • Iterate until convergence
  • 43. Outcome: Camera Transform • Transformation from marker to camera • Rotation and translation • T_CM: 4×4 transformation matrix from marker coordinates to camera coordinates
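Putting the last two slides together as an OpenCV sketch. Marker size and intrinsics are assumed example values; cv2.SOLVEPNP_IPPE_SQUARE provides the homography-style initial pose for a square target, and solvePnPRefineLM performs the iterative nonlinear refinement:

    import cv2
    import numpy as np

    s = 0.04  # half edge of an assumed 80 mm marker, in metres
    object_pts = np.float32([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]])
    K = np.float64([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # example intrinsics
    dist = np.zeros(5)                                         # assume no lens distortion

    # quads[0] from the detection sketch; corners assumed in matching order
    ok, rvec, tvec = cv2.solvePnP(object_pts, quads[0], K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    rvec, tvec = cv2.solvePnPRefineLM(object_pts, quads[0], K, dist, rvec, tvec)

    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    T_CM = np.eye(4)             # the 4x4 marker-to-camera transform T_CM
    T_CM[:3, :3], T_CM[:3, 3] = R, tvec.ravel()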
  • 44. Tracking Challenges in ARToolKit • False positives and inter-marker confusion (image by M. Fiala) • Image noise (e.g. poor lens, block coding/compression, neon tube) • Unfocused camera, motion blur • Dark/unevenly lit scene, vignetting • Jittering (Photoshop illustration) • Occlusion (image by M. Fiala)
  • 46. But – you can’t cover the world with ARToolKit markers!
  • 49. Visual Tracking Approaches • Marker based tracking with artificial features • Make a model before tracking • Model based tracking with natural features • Acquire a model before tracking • Simultaneous localization and mapping • Build a model while tracking it
  • 50. Natural Feature Tracking • Use natural cues of real elements • Edges • Surface texture • Interest points • Model-based or model-free • No visual pollution • (Figure: contours, feature points, surfaces)
  • 51. Natural Features • Detect salient interest points in the image • Must be easily found • Location in the image should remain stable when the viewpoint changes • Requires textured surfaces • Alternative: edge features can be used (less discriminative) • Match interest points to a tracking model database • Database filled with results of 3D reconstruction • Matching entire (sub-)images is too costly • Typically interest points are compiled into “descriptors” (Images: Gerhard Reitmayr, Martin Hirzer)
  • 53. Demo: Vuforia Texture Tracking https://www.youtube.com/watch?v=1Qf5Qew5zSU
  • 54. Tracking by Keypoint Detection • This is what most trackers do… • Targets are detected every frame • Popular because tracking and detection are solved simultaneously • Pipeline: Camera image → Keypoint detection → Descriptor creation and matching → Outlier removal → Pose estimation and refinement → Pose (the first stages perform recognition)
  • 55. Detection and Tracking • Tracking and detection are complementary approaches • Detection (at start, or when the target is not yet found): recognize the target type, detect the target, initialize the camera pose • After successful detection, the target is tracked incrementally • Incremental tracking is fast and robust to blur, lighting changes, and tilt • If the target is lost, detection is activated again
  • 56. What is a Keypoint? • It depends on the detector you use! • For high performance use the FAST corner detector • Apply FAST to all pixels of your image • Obtain a set of keypoints for your image • Describe the keypoints Rosten, E., & Drummond, T. (2006, May). Machine learning for high-speed corner detection. In European conference on computer vision (pp. 430-443). Springer Berlin Heidelberg.
  • 57. FAST Corner Keypoint Detection
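A minimal OpenCV sketch of the FAST step; the image file and the threshold of 20 are arbitrary choices:

    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(img, None)
    print(len(keypoints), "FAST keypoints found")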
  • 59. Descriptors • Describe the keypoint features • Can use SIFT • Estimate the dominant keypoint orientation using gradients • Compensate for the detected orientation • Describe the keypoints in terms of the gradients surrounding them • Wagner D., Reitmayr G., Mulloni A., Drummond T., Schmalstieg D., Real-Time Detection and Tracking for Augmented Reality on Mobile Phones. IEEE Transactions on Visualization and Computer Graphics, May/June 2010
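One way to describe the FAST keypoints from the previous snippet is OpenCV's SIFT implementation (the cited paper actually uses a lighter descriptor tuned for phones; SIFT is just the option named on the slide):

    import cv2

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(img, keypoints)  # img/keypoints from the FAST sketch
    # Each row of descriptors is a 128-D gradient histogram, computed relative
    # to the keypoint's dominant orientation for rotation invariance.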
  • 60. Database Creation • Offline step – create database of known features • Searching for corners in a static image • For robustness look at corners on multiple scales • Some corners are more descriptive at larger or smaller scales • We don’t know how far users will be from our image • Build a database file with all descriptors and their position on the original image
  • 61. Real-time Tracking • Search for known keypoints in the video image • Create the descriptors • Match the descriptors from the live video against those in the database • Brute force is not an option • Need the speed-up of special data structures (same pipeline as slide 54)
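One such structure is a KD-tree index, as exposed by OpenCV's FLANN matcher. A sketch, where database_descriptors (the offline database, float32) is assumed to exist:

    import cv2

    FLANN_INDEX_KDTREE = 1
    flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=4),
                                  dict(checks=32))
    matches = flann.knnMatch(descriptors, database_descriptors, k=2)  # 2 nearest neighbours each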
  • 62. NFT – Outlier Removal • Removing outlier features • Several removal techniques • Simple geometric tests • Is the keypoint rotation invariant? • Do keypoints remain relative to each other? • Homography-based tests
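A common concrete recipe (one option, not the only one): Lowe's ratio test on the k=2 matches from the previous snippet, then a RANSAC homography as the geometric test. db_keypoints, the database keypoint positions, is assumed:

    import cv2
    import numpy as np

    good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # ratio test
    src = np.float32([keypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([db_keypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)           # homography-based test
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]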
  • 63. NFT – Pose refinement • Pose from homography makes a good starting point • Use Gauss-Newton iteration • Try to minimize the re-projection error of the keypoints • Typically 2–4 iterations are enough
  • 64. NFT – Real-time tracking • Search for keypoints in the video image • Create the descriptors • Match the descriptors from the live video against those in the database • Remove the keypoints that are outliers • Use the remaining keypoints to calculate the pose of the camera
  • 65. Example (figure: target image → feature detection → AR overlay)
  • 67. Edge Based Tracking • Example: RAPiD [Drummond et al. 02] • Initialization, Control Points, Pose Prediction (Global Method)
  • 68. Demo: Edge Based Tracking
  • 69. Line Based Tracking • Visual Servoing [Comport et al. 2004]
  • 70. 3D Model-Based Tracking • Tracking from 3D object shape • Align detected features to a 3D object model • Examples • Snapchat face tracking • Mechanical part tracking • Vehicle tracking • etc.
  • 71. Typical Model Based Tracking Algorithm
  • 72. Example: Vuforia Model Tracker • Uses pre-captured 3D model for tracking • On-screen guide to line up model
  • 74. Taxonomy of Model Based Tracking Lowney, M., & Raj, A. S. (2016). Model based tracking for augmented reality on mobile devices.
  • 75. Marker vs. Natural Feature Tracking • Marker tracking • Usually requires no database to be stored • Markers can be an eye-catcher • Tracking is less demanding • The environment must be instrumented • Markers usually work only when fully in view • Natural feature tracking • A database of keypoints must be stored/downloaded • Natural feature targets might catch the attention less • Natural feature targets are potentially everywhere • Natural feature targets work also if partially in view
  • 76. Visual Tracking Approaches • Marker based tracking with artificial features • Make a model before tracking • Model based tracking with natural features • Acquire a model before tracking • Simultaneous localization and mapping • Build a model while tracking it
  • 78. Tracking from an Unknown Environment • What to do when you don’t know any features? • A very important problem in mobile robotics – Where am I? • SLAM • Simultaneously Localize And Map the environment • Goal: recover both camera pose and map structure while initially knowing neither • Mapping: • Building a map of the environment the robot is in • Localisation: • Navigating this environment using the map while keeping track of the robot’s relative position and orientation
  • 79. Parallel Tracking and Mapping • Uses two concurrent threads, one for tracking and one for mapping, which run at different speeds • Tracking thread: estimates the camera pose for every frame and sends new keyframes to the mapper • Mapping thread: extends and improves the map at a slow update rate, sending map updates back to the tracker
  • 80. Parallel Tracking and Mapping • (Diagram: the video stream feeds the fast Tracking thread, which outputs the tracked local pose and passes new frames to the slow Mapping thread; map updates flow back) • Klein, G., & Murray, D. (2007). Parallel tracking and mapping for small AR workspaces. ISMAR 2007.
  • 81. Visual SLAM • Early SLAM systems (1986– ) • Computer vision and sensors (e.g. IMU, laser, etc.) • One of the most important algorithms in robotics • Visual SLAM • Using cameras only, such as stereo views • MonoSLAM (single camera) developed in 2007 (Davison)
  • 83. How SLAM Works • Three main steps 1. Track a set of points through successive camera frames 2. Use these tracks to triangulate their 3D positions 3. Simultaneously use the estimated point locations to calculate the camera pose that could have observed them • By observing a sufficient number of points it is possible to solve for both structure and motion (camera path and scene structure)
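Step 2 as an OpenCV sketch: given intrinsics K, a relative pose (R, t) from step 3, and matched 2xN pixel tracks pts1/pts2 (all assumed to come from the earlier steps), the tracked points are triangulated into 3D:

    import cv2
    import numpy as np

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # second camera pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN output
    pts3d = (pts4d[:3] / pts4d[3]).T                    # Euclidean 3D points, Nx3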
  • 84. Evolution of SLAM Systems • MonoSLAM (Davison, 2007) • Real-time SLAM from a single camera • PTAM (Klein, 2009) • First SLAM implementation on a mobile phone • FAB-MAP (Cummins, 2008) • Probabilistic localization and mapping • DTAM (Newcombe, 2011) • 3D surface reconstruction from every pixel in the image • KinectFusion (Izadi, 2011) • Real-time dense surface mapping and tracking using RGB-D
  • 86. LSD-SLAM (Engel 2014) • A novel direct monocular SLAM technique • Uses image intensities for both tracking and mapping • The camera is tracked using direct image alignment • Geometry is estimated as semi-dense depth maps • Supports very large-scale tracking • Runs in real time on CPU and smartphone
  • 88. Direct Method vs. Feature Based • Direct methods use all the information in the image, whereas feature-based approaches only use small patches around corners and edges
  • 89. Applications of SLAM Systems • Many possible applications • Augmented Reality camera tracking • Mobile robot localisation • Real-world navigation aid • 3D scene reconstruction • 3D object reconstruction • etc. • Assumptions • Camera moves through an unchanging scene • So not suitable for person tracking or gesture recognition • Both involve non-rigidly deforming objects and a non-static map
  • 90. Hybrid Tracking Interfaces • Combine multiple tracking technologies together • Active-Passive: Magnetic, Vision • Active-Inertial: Vision, Inertial • Passive-Inertial: Compass, Inertial
  • 91. Combining Sensors and Vision • Sensors • Produce noisy output (= jittering augmentations) • Are not sufficiently accurate (= wrongly placed augmentations) • Give us first information on where we are in the world, and what we are looking at • Vision • Is more accurate (= stable and correct augmentations) • Requires choosing the correct keypoint database to track from • Requires registering our local coordinate frame (online-generated model) to the global one (world)
  • 93. Types of Sensor Fusion • Complementary • Combining sensors with different degrees of freedom • Sensors must be synchronized (or require inter-/extrapolation) • E.g., combine a position-only and an orientation-only sensor • E.g., orthogonal 1D sensors in a gyro or magnetometer are complementary • Competitive • Different sensor types measure the same degree of freedom • Redundant sensor fusion: use the worse sensor only if the better one is unavailable • E.g., GPS + pedometer • Statistical sensor fusion • www.augmentedrealitybook.org
  • 94. Example: Outdoor Hybrid Tracking • Combines • Computer vision • Inertial gyroscope sensors • Both correct for each other • Inertial gyro • Provides frame-to-frame prediction of camera orientation, fast sensing • Drifts over time • Computer vision • Natural feature tracking, corrects for gyro drift • Slower, less accurate
  • 95. Robust Outdoor Tracking • Hybrid tracking • Computer vision, GPS, inertial • Going Out • Reitmayr & Drummond (Univ. Cambridge) • Reitmayr, G., & Drummond, T. W. (2006). Going out: robust model-based tracking for outdoor augmented reality. In Mixed and Augmented Reality, 2006. ISMAR 2006. IEEE/ACM International Symposium on (pp. 109-118). IEEE.
  • 97. Demo: Going Out Hybrid Tracking
  • 98. ARKit – Visual Inertial Odometry • Uses both computer vision + inertial sensing • Tracking position twice • Computer vision – feature tracking, 2D plane tracking • Inertial sensing – using the phone IMU • Output combined via Kalman filter • Determine which output is most accurate • Pass pose to ARKit SDK • Each system complements the other • Computer vision – needs visual features • IMU – drifts over time, doesn’t need features
  • 99. ARKit – Visual Inertial Odometry • Slow camera • Fast IMU • If the camera drops out, the IMU takes over • The camera corrects IMU errors
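A toy illustration of that division of labour (a 1D complementary filter, a deliberate simplification and not ARKit's actual Kalman filter): fast gyro rates are integrated every step, and each slower vision estimate nudges the state back, cancelling the accumulated drift.

    def fuse_orientation(angle, gyro_rate, dt, vision_angle=None, k=0.02):
        angle += gyro_rate * dt               # fast IMU prediction (drifts over time)
        if vision_angle is not None:          # slow vision correction, when available
            angle = (1.0 - k) * angle + k * vision_angle
        return angle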
  • 101. Conclusions • Tracking and Registration are key problems • Registration error • Measures against static error • Measures against dynamic error • AR typically requires multiple tracking technologies • Computer vision most popular • Research Areas: • SLAM systems, Deformable models, Mobile outdoor tracking
  • 102. More Information Fua, P., & Lepetit, V. (2007). Vision based 3D tracking and pose estimation for mixed reality. In Emerging technologies of augmented reality: Interfaces and design (pp. 1-22). IGI Global.
  • 104. Augmented Reality Technology • Combines Real and Virtual Images • Needs: Display technology • Interactive in real-time • Needs: Input and interaction technology • Registered in 3D • Needs: Viewpoint tracking technology
  • 105. How Do You Design an Interface for This?
  • 106. AR Interaction • Designing AR systems = interface design • Using different input and output technologies • Objective is a high-quality user experience • Ease of use and learning • Performance and satisfaction
  • 107. Typical Interface Design Path • 1. Prototype demonstration • 2. Adoption of interaction techniques from other interface metaphors • 3. Development of new interface metaphors appropriate to the medium • 4. Development of formal theoretical models for predicting and modeling user actions • (Examples at different stages: Desktop WIMP, Virtual Reality, Augmented Reality)
  • 108. Interacting with AR Content • You can see spatially registered AR… how can you interact with it?
  • 109. Different Types of AR Interaction • Browsing Interfaces • simple (conceptually!), unobtrusive • 3D AR Interfaces • expressive, creative, require attention • Tangible Interfaces • Embedded into conventional environments • Tangible AR • Combines TUI input + AR display
  • 110. AR Interfaces as Data Browsers • 2D/3D virtual objects are registered in 3D • “VR in Real World” • Interaction • 2D/3D virtual viewpoint control • Applications • Visualization, training
  • 111. AR Information Browsers • Information is registered to real-world context • Hand held AR displays • Interaction • Manipulation of a window into information space • Applications • Context-aware information displays Rekimoto, et al. 1997
  • 114. Current AR Information Browsers • Mobile AR • GPS + compass • Many Applications • Wikitude • Yelp • Google maps • …
  • 115. Example: Google Maps AR Mode • AR Navigation Aid • GPS + compass, 2D/3D object placement
  • 117. Advantages and Disadvantages • Important class of AR interfaces • Wearable computers • AR simulation, training • Limited interactivity • Modification of virtual content is difficult Rekimoto, et al. 1997
  • 118. 3D AR Interfaces • Virtual objects displayed in 3D physical space and manipulated • HMDs and 6DOF head-tracking • 6DOF hand trackers for input • Interaction • Viewpoint control • Traditional 3D user interface interaction: manipulation, selection, etc. Kiyokawa, et al. 2000
  • 119. AR 3D Interaction (2000)
  • 122. Advantages and Disadvantages • Important class of AR interfaces • Entertainment, design, training • Advantages • User can interact with 3D virtual object everywhere in space • Natural, familiar interaction • Disadvantages • Usually no tactile feedback • User has to use different devices for virtual and physical objects Oshima, et al. 2000