COSC 426: Augmented Reality

            Mark Billinghurst
      mark.billinghurst@hitlabnz.org

             Sept 19th 2012

    Lecture 9: AR Research Directions
Looking to the Future
The Future is with us
It takes at least 20 years for new
      technologies to go from the lab to the
      lounge.

“The technologies that will significantly affect
      our lives over the next 10 years have been
      around for a decade. The future is with us.
      The trick is learning how to spot it. The
      commercialization of research, in other words,
      is far more about prospecting than alchemy.”

                                Bill Buxton, Oct 11th 2004
Research Directions

  experiences    –  Usability
  applications   –  Interaction
  tools          –  Authoring
  components     –  Tracking, Display
                               Sony CSL © 2004
Research Directions
  Components
    Markerless tracking, hybrid tracking
    Displays, input devices
  Tools
    Authoring tools, user generated content
  Applications
    Interaction techniques/metaphors
  Experiences
    User evaluation, novel AR/MR experiences
HMD Design
Occlusion with See-through HMD
  The Problem
      Occluding real objects with virtual
      Occluding virtual objects with real

  [Images: real scene vs. current see-through HMD]
ELMO (Kiyokawa 2001)
  Occlusive see-through HMD
     Masking LCD
     Real-time range finding
ELMO Demo
ELMO Design
  [Diagram: the real world passes through an LCD mask and an optical combiner; virtual images come from the LCD; a depth sensing unit measures the real scene]

     Use LCD mask to block real world
     Depth sensing for occluding virtual images (see the sketch below)
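Per pixel, the masking logic compares the sensed real-world depth against the virtual object's depth: the LCD cell is made opaque where the virtual object is in front, and virtual pixels are culled where the real world is closer. A minimal numpy sketch of that logic (illustrative only, not Kiyokawa's implementation; the array names are assumptions):

import numpy as np

def elmo_masks(real_depth, virtual_depth):
    # real_depth: HxW depths of the real scene from the range finder (metres)
    # virtual_depth: HxW depths of rendered virtual pixels, +inf where empty
    virtual_in_front = virtual_depth < real_depth
    lcd_mask = virtual_in_front      # opaque LCD cells block real light here
    draw_virtual = virtual_in_front  # only draw virtual pixels the real world does not occlude
    return lcd_mask, draw_virtual

# Example: a wall 2 m away with a virtual cube rendered at 1 m
real = np.full((480, 640), 2.0)
virtual = np.full((480, 640), np.inf)
virtual[200:280, 300:380] = 1.0
lcd_mask, draw_virtual = elmo_masks(real, virtual)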
ELMO Results
Future Displays




  Always on, unobtrusive
Google Glass
Contact Lens Display
  Babak Parviz
     University of Washington
  MEMS components
    Transparent elements
    Micro-sensors
  Challenges
    Miniaturization
    Assembly
    Eye safety
Contact Lens Prototype
Applications
Interaction Techniques
  Input techniques
      3D vs. 2D input
      Pen/buttons/gestures
  Natural Interaction
     Speech + gesture input
  Intelligent Interfaces
     Artificial agents
     Context sensing
Flexible Displays
  Flexible Lens Surface
     Bimanual interaction
     Digital paper analogy




                    Red Planet, 2000
Sony CSL © 2004
Tangible User Interfaces (TUIs)
  GUMMI bendable display prototype
    Reproduced by permission of Sony CSL
Sony CSL © 2004
Lucid Touch
  Microsoft Research & Mitsubishi Electric Research Labs
  Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C.
   LucidTouch: A See-Through Mobile Device
   In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007,
   pp. 269–278.
Auditory Modalities
  Auditory
     auditory icons
     earcons
     speech synthesis/recognition

     Nomadic Radio (Sawhney)
      -  combines spatialized audio
      -  auditory cues
      -  speech synthesis/recognition
Gestural interfaces
  1. Micro-gestures
     (unistroke, smartPad)
  2. Device-based gestures
     (tilt-based examples)
  3. Embodied interaction
     (EyeToy)
Natural Gesture Interaction on Mobile




  Use mobile camera for hand tracking
    Fingertip detection (see the sketch below)
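A minimal OpenCV sketch of the idea: segment skin-coloured pixels, take the largest contour as the hand, and read fingertip candidates off its convex hull. The colour range and defect threshold are illustrative assumptions, not values from the system on the slide.

import cv2
import numpy as np

def find_fingertips(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-colour range in HSV (assumed; tune per camera and lighting)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)  # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    tips = []
    if defects is not None:
        for start, end, _, depth in defects[:, 0]:
            if depth > 10000:                  # deep defects separate fingers
                tips.append(tuple(hand[start][0]))
                tips.append(tuple(hand[end][0]))
    return tips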
Evaluation




  Gesture input more than twice as slow as touch
  No difference in naturalness
Haptic Modalities
      Haptic interfaces
            Simple uses in mobiles? (vibration instead of ringtone)
            Sony’s TouchEngine
             -  physiological experiments show you can perceive two stimuli 5 ms
                apart, spaced as little as 0.2 microns


[Diagram: TouchEngine piezoelectric actuator – 4 µm layers stacked into a 28 µm element, driven by voltage V]
Haptic Input




  AR Haptic Workbench
    CSIRO 2003 – Adcock et al.
AR Haptic Interface




  Phantom, ARToolKit, Magellan
Natural Interaction
The Vision of AR
To Make the Vision Real..
  Hardware/software requirements
    Contact lens displays
    Free space hand/body tracking
    Environment recognition
    Speech/gesture recognition
    Etc..
Natural Interaction
  Automatically detecting real environment
    Environmental awareness
    Physically based interaction
  Gesture Input
    Free-hand interaction
  Multimodal Input
    Speech and gesture interaction
    Implicit rather than Explicit interaction
Environmental Awareness
AR MicroMachines
  AR experience with environment awareness
   and physically-based interaction
    Based on MS Kinect RGB-D sensor
  Augmented environment supports
    occlusion, shadows
    physically-based interaction between real and
     virtual objects
Operating Environment
Architecture
  Our framework uses five libraries:

    OpenNI (depth camera access)
    OpenCV (image processing)
    OPIRA (registration and tracking)
    Bullet Physics (physics simulation)
    OpenSceneGraph (rendering)
System Flow
  The system flow consists of three sections:
     Image Processing and Marker Tracking
     Physics Simulation
     Rendering
Physics Simulation




  Create virtual mesh over real world
  Update at 10 fps – can move real objects
  Used by the physics engine for collision detection (virtual/real)
  Used by OpenSceneGraph for occlusion and shadows (see the sketch below)
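The collision side of this can be sketched by rebuilding a Bullet heightfield from each fresh depth frame; pybullet stands in below for the C++ Bullet integration the slides describe, and every name and parameter is illustrative.

import numpy as np
import pybullet as p

p.connect(p.DIRECT)  # headless physics; rendering is handled elsewhere

def depth_to_terrain(depth_m, cell_size=0.01):
    # depth_m: HxW depth image in metres (e.g. from the Kinect)
    rows, cols = depth_m.shape
    heights = (depth_m.max() - depth_m).flatten().tolist()  # closer = higher
    shape = p.createCollisionShape(
        shapeType=p.GEOM_HEIGHTFIELD,
        meshScale=[cell_size, cell_size, 1.0],
        heightfieldData=heights,
        numHeightfieldRows=rows,
        numHeightfieldColumns=cols)
    return p.createMultiBody(baseMass=0, baseCollisionShapeIndex=shape)

terrain = depth_to_terrain(np.random.uniform(0.8, 1.0, (64, 64)))  # stand-in frame
box = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05, 0.05, 0.02])
car = p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=box,
                        basePosition=[0, 0, 1.0])
for _ in range(240):
    p.stepSimulation()  # in the real system the terrain is rebuilt at ~10 fps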
Rendering




[Images: occlusion; shadows]
Natural Gesture Interaction
Motivation
  AR MicroMachines and PhobiAR
    Treated the environment as static – no tracking
    Tracked objects in 2D
  More realistic interaction requires 3D gesture tracking
Motivation
  Occlusion Issues
    AR MicroMachines only achieved realistic occlusion because the user’s viewpoint matched the Kinect’s
    Proper occlusion requires a more complete model of scene objects
HITLabNZ’s Gesture Library

  Architecture (five layers)
    5. Gesture: static, dynamic, and context-based gestures
    4. Modeling: hand recognition/modeling, rigid-body modeling
    3. Classification/Tracking
    2. Segmentation
    1. Hardware Interface
HITLabNZ’s Gesture Library

  Layer 1: Hardware Interface
    Supports PCL, OpenNI, OpenCV, and the Kinect SDK
    Provides access to depth, RGB, and XYZRGB data
    Usage: capturing color images, depth images, and concatenated point clouds from a single camera or multiple cameras
    Example devices: Kinect for Xbox 360, Kinect for Windows, Asus Xtion Pro Live
HITLabNZ’s Gesture Library

  Layer 2: Segmentation
    Segments images and point clouds based on color, depth, and space
    Usage: segmenting images or point clouds using color models, depth, or spatial properties such as location, shape, and size
    Examples: skin color segmentation, depth threshold (see the sketch below)
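Both examples on this slide can be sketched in a few lines of OpenCV; the HSV range and depth band below are illustrative assumptions rather than the library's actual values.

import cv2
import numpy as np

def skin_mask(frame_bgr):
    # Skin-colour segmentation; the HSV range is a rough, camera-dependent guess
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

def depth_mask(depth_mm, near=400, far=900):
    # Depth threshold: keep pixels inside the interaction volume (millimetres)
    return ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

def hand_mask(frame_bgr, depth_mm):
    # A hand candidate is skin-coloured AND inside the depth band
    return cv2.bitwise_and(skin_mask(frame_bgr), depth_mask(depth_mm))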
HITLabNZ’s Gesture Library

  Layer 3: Classification/Tracking
    Identifies and tracks objects between frames based on XYZRGB data
    Usage: identifying the current position/orientation of the tracked object in space
    Example: a training set of hand poses, where colors represent unique regions of the hand; raw (uncleaned) classifier output on real hand input (depth image)
HITLabNZ’s Gesture Library

  Layer 4: Modeling
    Hand recognition/modeling
      Skeleton based (for low-resolution approximation)
      Model based (for more accurate representation)
    Object modeling (identification and tracking of rigid-body objects)
    Physical modeling (physical interaction)
      Sphere proxy
      Model based
      Mesh based
    Usage: general spatial interaction in AR/VR environments
Method
  Represent models as collections of spheres moving with the models in the Bullet physics engine (see the sketch below)
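The sphere-proxy idea can be sketched with kinematic spheres that are re-positioned to the latest tracked hand points each frame; pybullet stands in for the C++ Bullet engine, and the function names are assumptions.

import pybullet as p

p.connect(p.DIRECT)

def make_sphere_proxies(n, radius=0.012):
    # One kinematic (mass = 0) sphere per tracked hand point
    shape = p.createCollisionShape(p.GEOM_SPHERE, radius=radius)
    return [p.createMultiBody(baseMass=0, baseCollisionShapeIndex=shape)
            for _ in range(n)]

def update_proxies(bodies, points):
    # Move each sphere to the latest tracked 3D point (x, y, z in metres)
    for body, (x, y, z) in zip(bodies, points):
        p.resetBasePositionAndOrientation(body, [x, y, z], [0, 0, 0, 1])

hand = make_sphere_proxies(16)
# Per frame: update_proxies(hand, tracked_points); p.stepSimulation()
# Dynamic virtual objects then collide with the spheres, so the real hand
# can push them around.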
Method
  Render the AR scene with OpenSceneGraph, using the depth map for occlusion
  Shadows yet to be implemented
Results
HITLabNZ’s Gesture Library

  Layer 5: Gesture
    Static gestures (hand pose recognition)
    Dynamic gestures (meaningful movement recognition)
    Context-based gesture recognition (gestures with context, e.g. pointing)
    Usage: issuing commands, anticipating user intention, and high-level interaction
Multimodal Interaction
Multimodal Interaction
  Combined speech input
  Gesture and Speech complementary
    Speech
     -  modal commands, quantities
    Gesture
     -  selection, motion, qualities
  Previous work found multimodal interfaces
   intuitive for 2D/3D graphics interaction
1. Marker Based Multimodal Interface




  Add speech recognition to VOMAR
  Paddle + speech commands
Commands Recognized (see the fusion sketch after this list)
  Create Command "Make a blue chair": to create a virtual
   object and place it on the paddle.
  Duplicate Command "Copy this": to duplicate a virtual object
   and place it on the paddle.
  Grab Command "Grab table": to select a virtual object and
   place it on the paddle.
  Place Command "Place here": to place the attached object in
   the workspace.
  Move Command "Move the couch": to attach a virtual object
   in the workspace to the paddle so that it follows the paddle
   movement.
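A minimal sketch of how such commands could be fused with paddle context: match the recognized utterance against a small grammar and attach the current paddle pose and target. The grammar and field names are illustrative assumptions, not the VOMAR implementation.

import re

GRAMMAR = [
    (r"make a (\w+) (\w+)", "create"),     # "Make a blue chair"
    (r"copy this",          "duplicate"),  # duplicate the object under the paddle
    (r"grab (\w+)",         "grab"),       # "Grab table"
    (r"place here",         "place"),      # place the attached object
    (r"move the (\w+)",     "move"),       # "Move the couch"
]

def fuse(speech_text, paddle_pose, object_under_paddle=None):
    # Resolve a spoken command using gesture context (paddle pose and target)
    for pattern, action in GRAMMAR:
        match = re.fullmatch(pattern, speech_text.lower().strip())
        if match:
            return {"action": action, "args": match.groups(),
                    "pose": paddle_pose, "target": object_under_paddle}
    return None

print(fuse("Make a blue chair", paddle_pose=(0.1, 0.0, 0.3)))
# {'action': 'create', 'args': ('blue', 'chair'), ...}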
System Architecture
Object Relationships




"Put chair behind the table”
Where is behind?
                               View-specific regions
User Evaluation
  Performance time
     Speech + static paddle significantly faster




  Gesture-only condition less accurate for position/orientation
  Users preferred speech + paddle input
Subjective Surveys
2. Free Hand Multimodal Input
  Use free hand to interact with AR content
  Recognize simple gestures
  No marker tracking




        Point          Move          Pick/Drop
Multimodal Architecture
Multimodal Fusion
Hand Occlusion
User Evaluation



  Change object shape, colour and position
  Conditions
    Speech only, gesture only, multimodal
  Measure
    performance time, error, subjective survey
Experimental Setup




Change object shape
  and colour
Results
  Average performance time (MMI, speech fastest)
     Gesture: 15.44s
     Speech: 12.38s
     Multimodal: 11.78s
  No difference in user errors
  User subjective survey
     Q1: How natural was it to manipulate the object?
      -  MMI, speech significantly better
     70% preferred MMI, 25% speech only, 5% gesture only
Intelligent Interfaces
Intelligent Interfaces
  Most AR systems are stupid
    Don’t recognize user behaviour
    Don’t provide feedback
    Don’t adapt to the user
  Especially important for training
    Scaffolded learning
    Moving beyond checklists of actions
Intelligent Interfaces




  AR interface + intelligent tutoring system
    ASPIRE constraint-based system (from the University of Canterbury)
    Constraints
     -  relevance condition, satisfaction condition, feedback
Domain Ontology
Intelligent Feedback




  Actively monitors user behaviour
     Implicit vs. explicit interaction
  Provides corrective feedback
Evaluation Results
  16 subjects, with and without ITS
  Improved task completion




  Improved learning
Intelligent Agents
  AR characters
    Virtual embodiment of system
    Multimodal input/output
  Examples
    AR Lego, Welbo, etc.
    Mr Virtuoso
     -  AR character more real, more fun
     -  On-screen 3D and AR similar in usefulness
Context Sensing
Context Sensing
  TKK Project
  Using context to
   manage information
  Context from
    Speech
    Gaze
    Real world
  AR Display
Gaze Interaction
AR View
More Information Over Time
Experiences
Novel Experiences
  Crossing Boundaries
     Ubiquitous VR/AR
  Collaborative Experiences
  Massive AR
     AR + Social Networking
  Usability
Crossing Boundaries




           Jun Rekimoto, Sony CSL
Invisible Interfaces




            Jun Rekimoto, Sony CSL
Milgram’s Reality-Virtuality continuum

                       Mixed Reality

   Real Environment → Augmented Reality (AR) → Augmented Virtuality (AV) → Virtual Environment

              Reality - Virtuality (RV) Continuum
The MagicBook




Reality → Augmented Reality (AR) → Augmented Virtuality (AV) → Virtuality
Invisible Interfaces




            Jun Rekimoto, Sony CSL
Example: Visualizing Sensor Networks
  Rauhala et al. 2007 (Linköping)
  Network of Humidity Sensors
    ZigBee wireless communication
  Use Mobile AR to Visualize Humidity
Invisible Interfaces




            Jun Rekimoto, Sony CSL
UbiVR – CAMAR
  CAMAR Controller
  CAMAR Viewer
  CAMAR Companion

GIST - Korea
ubiHome @ GIST

[Diagram: ubiHome smart-home testbed (©ubiHome). Services: media services, light service, MR window. Sensing: ubiTrack (where/when), Tag-it, ubiKey, PDA, couch sensor, and door sensor, each reporting who/what/when/how]
CAMAR - GIST
 (CAMAR: Context-Aware Mobile Augmented Reality)
  UCAM architecture
     Sensor → Service (Integrator, Manager, Interpreter, ServiceProvider) → Content
     Context Interface
     Network Interface: BAN/PAN (Bluetooth), TCP/IP (Discovery, Control, Event)
     Operating System
  Deployed as wear-UCAM, ubi-UCAM, and vr-UCAM
Hybrid User Interfaces
Goal: To incorporate AR into a normal meeting
 environment
  Physical Components
      Real props
  Display Elements
     2D and 3D (AR) displays
  Interaction Metaphor
     Use multiple tools – each relevant for the task
Hybrid User Interfaces


    1. Personal:   private display
    2. Tabletop:   private display + group display
    3. Whiteboard: private display + public display
    4. Multigroup: private display + group display + public display
[Diagram: Weiser’s axis runs from terminal, through desktop, to ubiquitous computing (UbiComp); Milgram’s axis runs from reality, through AR, to virtual reality; mobile AR, Ubi AR, and Ubi VR lie between the two. From: Joe Newman]
Massive Multi User

[Diagram: scale axis from single user to massive multi-user, crossed with Weiser’s terminal → ubiquitous axis and Milgram’s reality → VR axis]
Remote Collaboration
AR Client




  HMD and HHD
    Showing virtual images over real world
    Images drawn by remote expert
    Local interaction
Shared Visual Context (Fussell, 1999)




  Remote video collaboration
    Shared manual, video viewing
    Compared Video, Audio, Side-by-side collaboration
    Communication analysis
WACL (Kurata, 2004)




  Wearable Camera/Laser Pointer
    Independent pointer control
    Remote panorama view
WACL (Kurata, 2004)




  Remote Expert View
    Panorama viewing, annotation, image capture
As If Being There (Poelman, 2012)




  AR + Scene Capture
    HMD viewing, remote expert
    Gesture input
    Scene capture (PTAM), stereo camera
As If Being There (Poelman, 2012)




  Gesture Interaction
    Hand postures recognized
    Menu superimposed on hands
Real World Capture




  Using Kinect for 3D Scene Capture
    Camera tracking
    AR overlay
    Remote situational awareness
Remote scene capture with AR annotations added
Future Directions



                   Massive Multiuser
  Handheld AR for the first time allows extremely high
   numbers of AR users
  Requires
      New types of applications/games
      New infrastructure (server/client/peer-to-peer)
      Content distribution…
Massive MultiUser
  2D Applications
     MSN – 29 million
     Skype – 10 million
     Facebook – 100 million+
  3D/VR Applications
     Second Life – >50,000
     Stereo projection – <500
  Augmented Reality
     Shared Space (1999) – 4 users
     Invisible Train (2004) – 8 users
BASIC VIEW
PERSONAL VIEW
Augmented Reality 2.0 Infrastructure
Leveraging Web 2.0
  Content retrieval using HTTP (see the sketch after this list)
  XML encoded meta information
     KML placemarks + extensions
  Queries
     Based on location (from GPS, image recognition)
     Based on situation (barcode markers)
  Queries also deliver tracking feature databases
  Everybody can set up an AR 2.0 server
  Syndication:
     Community servers for end-user content
     Tagging
  AR client subscribes to arbitrary number of feeds
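A minimal sketch of the retrieval step: fetch KML placemarks over HTTP for the device's GPS position and parse them with the standard library. The server URL and its query parameters are hypothetical.

import urllib.request
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def fetch_placemarks(server_url, lat, lon):
    # GET nearby placemarks; the lat/lon query parameters are assumed
    with urllib.request.urlopen(f"{server_url}?lat={lat}&lon={lon}") as resp:
        root = ET.fromstring(resp.read())
    marks = []
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", namespaces=KML_NS)
        coords = pm.findtext(".//kml:coordinates", namespaces=KML_NS)
        if coords:
            lon_, lat_, *_ = (float(v) for v in coords.strip().split(","))
            marks.append((name, lat_, lon_))
    return marks

# e.g. marks = fetch_placemarks("http://example.org/ar/feed.kml", -43.53, 172.62)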
Content
  Content creation and delivery
    Content creation pipeline
    Delivering previously unknown content
  Streaming of
    Data (objects, multi-media)
    Applications
  Distribution
    How do users learn about all that content?
    How do they access it?
ARML (AR Markup Language)
Scaling Up




  AR on a City Scale
  Using mobile phone as ubiquitous sensor
  MIT Senseable City Lab
    http://senseable.mit.edu/
WikiCity Rome (Senseable City Lab MIT)
Conclusions
AR Research in the HIT Lab NZ
  Gesture interaction
     Gesture library
  Multimodal interaction
     Collaborative speech/gesture interfaces
  Mobile AR interfaces
     Outdoor AR, interaction methods, navigation tools
  AR authoring tools
     Visual programming for AR
  Remote Collaboration
     Mobile AR for remote interaction
More Information
•  Mark Billinghurst
  –  mark.billinghurst@hitlabnz.org
•  Websites
  –  http://www.hitlabnz.org/
  –  http://artoolkit.sourceforge.net/
  –  http://www.osgart.org/
  –  http://www.hitlabnz.org/wiki/buildAR/

More Related Content

What's hot

2013 426 Lecture 2: Augmented Reality Technology
2013 426 Lecture 2:  Augmented Reality Technology2013 426 Lecture 2:  Augmented Reality Technology
2013 426 Lecture 2: Augmented Reality Technology
Mark Billinghurst
 
SVR2011 Keynote
SVR2011 KeynoteSVR2011 Keynote
SVR2011 Keynote
Mark Billinghurst
 
2013 Lecture 8: Mobile AR
2013 Lecture 8: Mobile AR2013 Lecture 8: Mobile AR
2013 Lecture 8: Mobile AR
Mark Billinghurst
 
Hands and Speech in Space: Multimodal Input for Augmented Reality
Hands and Speech in Space: Multimodal Input for Augmented Reality Hands and Speech in Space: Multimodal Input for Augmented Reality
Hands and Speech in Space: Multimodal Input for Augmented Reality
Mark Billinghurst
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and Systems
Mark Billinghurst
 
Lecture 5: 3D User Interfaces for Virtual Reality
Lecture 5: 3D User Interfaces for Virtual RealityLecture 5: 3D User Interfaces for Virtual Reality
Lecture 5: 3D User Interfaces for Virtual Reality
Mark Billinghurst
 
User Interfaces and User Centered Design Techniques for Augmented Reality and...
User Interfaces and User Centered Design Techniques for Augmented Reality and...User Interfaces and User Centered Design Techniques for Augmented Reality and...
User Interfaces and User Centered Design Techniques for Augmented Reality and...
Stuart Murphy
 
Mixed Reality in the Workspace
Mixed Reality in the WorkspaceMixed Reality in the Workspace
Mixed Reality in the Workspace
Mark Billinghurst
 
Empathic Glasses
Empathic GlassesEmpathic Glasses
Empathic Glasses
Mark Billinghurst
 
COMP 4010 Lecture 9 AR Interaction
COMP 4010 Lecture 9 AR InteractionCOMP 4010 Lecture 9 AR Interaction
COMP 4010 Lecture 9 AR Interaction
Mark Billinghurst
 
COMP 4010 - Lecture 5: Interaction Design for Virtual Reality
COMP 4010 - Lecture 5: Interaction Design for Virtual RealityCOMP 4010 - Lecture 5: Interaction Design for Virtual Reality
COMP 4010 - Lecture 5: Interaction Design for Virtual Reality
Mark Billinghurst
 
Designing Usable Interface
Designing Usable InterfaceDesigning Usable Interface
Designing Usable Interface
Mark Billinghurst
 
426 lecture6b: AR Interaction
426 lecture6b: AR Interaction426 lecture6b: AR Interaction
426 lecture6b: AR Interaction
Mark Billinghurst
 
426 lecture 4: AR Developer Tools
426 lecture 4: AR Developer Tools426 lecture 4: AR Developer Tools
426 lecture 4: AR Developer Tools
Mark Billinghurst
 
Research Directions in Transitional Interfaces
Research Directions in Transitional InterfacesResearch Directions in Transitional Interfaces
Research Directions in Transitional Interfaces
Mark Billinghurst
 
Virtual Reality
Virtual RealityVirtual Reality
Virtual Reality
renoy reji
 
UX workshop
UX workshopUX workshop
UX workshop
Jonathan Wong
 
Tangible AR Interface
Tangible AR InterfaceTangible AR Interface
Tangible AR Interface
JongHyoun
 
COMP 4010 Lecture7 3D User Interfaces for Virtual Reality
COMP 4010 Lecture7 3D User Interfaces for Virtual RealityCOMP 4010 Lecture7 3D User Interfaces for Virtual Reality
COMP 4010 Lecture7 3D User Interfaces for Virtual Reality
Mark Billinghurst
 
COMP 4010: Lecture 4 - 3D User Interfaces for VR
COMP 4010: Lecture 4 - 3D User Interfaces for VRCOMP 4010: Lecture 4 - 3D User Interfaces for VR
COMP 4010: Lecture 4 - 3D User Interfaces for VR
Mark Billinghurst
 

What's hot (20)

2013 426 Lecture 2: Augmented Reality Technology
2013 426 Lecture 2:  Augmented Reality Technology2013 426 Lecture 2:  Augmented Reality Technology
2013 426 Lecture 2: Augmented Reality Technology
 
SVR2011 Keynote
SVR2011 KeynoteSVR2011 Keynote
SVR2011 Keynote
 
2013 Lecture 8: Mobile AR
2013 Lecture 8: Mobile AR2013 Lecture 8: Mobile AR
2013 Lecture 8: Mobile AR
 
Hands and Speech in Space: Multimodal Input for Augmented Reality
Hands and Speech in Space: Multimodal Input for Augmented Reality Hands and Speech in Space: Multimodal Input for Augmented Reality
Hands and Speech in Space: Multimodal Input for Augmented Reality
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and Systems
 
Lecture 5: 3D User Interfaces for Virtual Reality
Lecture 5: 3D User Interfaces for Virtual RealityLecture 5: 3D User Interfaces for Virtual Reality
Lecture 5: 3D User Interfaces for Virtual Reality
 
User Interfaces and User Centered Design Techniques for Augmented Reality and...
User Interfaces and User Centered Design Techniques for Augmented Reality and...User Interfaces and User Centered Design Techniques for Augmented Reality and...
User Interfaces and User Centered Design Techniques for Augmented Reality and...
 
Mixed Reality in the Workspace
Mixed Reality in the WorkspaceMixed Reality in the Workspace
Mixed Reality in the Workspace
 
Empathic Glasses
Empathic GlassesEmpathic Glasses
Empathic Glasses
 
COMP 4010 Lecture 9 AR Interaction
COMP 4010 Lecture 9 AR InteractionCOMP 4010 Lecture 9 AR Interaction
COMP 4010 Lecture 9 AR Interaction
 
COMP 4010 - Lecture 5: Interaction Design for Virtual Reality
COMP 4010 - Lecture 5: Interaction Design for Virtual RealityCOMP 4010 - Lecture 5: Interaction Design for Virtual Reality
COMP 4010 - Lecture 5: Interaction Design for Virtual Reality
 
Designing Usable Interface
Designing Usable InterfaceDesigning Usable Interface
Designing Usable Interface
 
426 lecture6b: AR Interaction
426 lecture6b: AR Interaction426 lecture6b: AR Interaction
426 lecture6b: AR Interaction
 
426 lecture 4: AR Developer Tools
426 lecture 4: AR Developer Tools426 lecture 4: AR Developer Tools
426 lecture 4: AR Developer Tools
 
Research Directions in Transitional Interfaces
Research Directions in Transitional InterfacesResearch Directions in Transitional Interfaces
Research Directions in Transitional Interfaces
 
Virtual Reality
Virtual RealityVirtual Reality
Virtual Reality
 
UX workshop
UX workshopUX workshop
UX workshop
 
Tangible AR Interface
Tangible AR InterfaceTangible AR Interface
Tangible AR Interface
 
COMP 4010 Lecture7 3D User Interfaces for Virtual Reality
COMP 4010 Lecture7 3D User Interfaces for Virtual RealityCOMP 4010 Lecture7 3D User Interfaces for Virtual Reality
COMP 4010 Lecture7 3D User Interfaces for Virtual Reality
 
COMP 4010: Lecture 4 - 3D User Interfaces for VR
COMP 4010: Lecture 4 - 3D User Interfaces for VRCOMP 4010: Lecture 4 - 3D User Interfaces for VR
COMP 4010: Lecture 4 - 3D User Interfaces for VR
 

Viewers also liked

Aesthetec at MEIC5, augmenting the world
Aesthetec at MEIC5, augmenting the worldAesthetec at MEIC5, augmenting the world
Aesthetec at MEIC5, augmenting the world
Aesthetec Studio
 
Designing Augmented Reality Experiences
Designing Augmented Reality ExperiencesDesigning Augmented Reality Experiences
Designing Augmented Reality Experiences
Mark Billinghurst
 
Designing Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for MobileDesigning Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for Mobile
TryMyUI
 
2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking 2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking
Mark Billinghurst
 
Augmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
Augmentet Reality, Smart Cities - Quo Vadis, DigitalisierungAugmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
Augmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
Matthias Stürmer
 
Experience Design for Mobile Augmented Reality
Experience Design for Mobile Augmented RealityExperience Design for Mobile Augmented Reality
Experience Design for Mobile Augmented RealityLightning Laboratories
 
Designing Outstanding AR Experiences
Designing Outstanding AR ExperiencesDesigning Outstanding AR Experiences
Designing Outstanding AR Experiences
Mark Billinghurst
 
Augmented reality
Augmented realityAugmented reality
Augmented reality
Shubham Pahune
 
Designing the future of Augmented Reality
Designing the future of Augmented RealityDesigning the future of Augmented Reality
Designing the future of Augmented Reality
Carina Ngai
 
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
University of Central Lancashire
 
Designing for an Augmented Reality world
Designing for an Augmented Reality worldDesigning for an Augmented Reality world
Designing for an Augmented Reality world
thomas.purves
 
Augmented Reality ppt
Augmented Reality pptAugmented Reality ppt
Augmented Reality ppt
Khyati Ganatra
 

Viewers also liked (12)

Aesthetec at MEIC5, augmenting the world
Aesthetec at MEIC5, augmenting the worldAesthetec at MEIC5, augmenting the world
Aesthetec at MEIC5, augmenting the world
 
Designing Augmented Reality Experiences
Designing Augmented Reality ExperiencesDesigning Augmented Reality Experiences
Designing Augmented Reality Experiences
 
Designing Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for MobileDesigning Augmented Reality Experiences for Mobile
Designing Augmented Reality Experiences for Mobile
 
2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking 2013 Lecture3: AR Tracking
2013 Lecture3: AR Tracking
 
Augmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
Augmentet Reality, Smart Cities - Quo Vadis, DigitalisierungAugmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
Augmentet Reality, Smart Cities - Quo Vadis, Digitalisierung
 
Experience Design for Mobile Augmented Reality
Experience Design for Mobile Augmented RealityExperience Design for Mobile Augmented Reality
Experience Design for Mobile Augmented Reality
 
Designing Outstanding AR Experiences
Designing Outstanding AR ExperiencesDesigning Outstanding AR Experiences
Designing Outstanding AR Experiences
 
Augmented reality
Augmented realityAugmented reality
Augmented reality
 
Designing the future of Augmented Reality
Designing the future of Augmented RealityDesigning the future of Augmented Reality
Designing the future of Augmented Reality
 
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
Designing Mobile Augmented Reality Art Applications: Addressing the Views of ...
 
Designing for an Augmented Reality world
Designing for an Augmented Reality worldDesigning for an Augmented Reality world
Designing for an Augmented Reality world
 
Augmented Reality ppt
Augmented Reality pptAugmented Reality ppt
Augmented Reality ppt
 

Similar to 426 Lecture 9: Research Directions in AR

Mobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research DirectionsMobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research Directions
Mark Billinghurst
 
Natural Interaction for Augmented Reality Applications
Natural Interaction for Augmented Reality ApplicationsNatural Interaction for Augmented Reality Applications
Natural Interaction for Augmented Reality Applications
Mark Billinghurst
 
COSC 426 Lect. 8: AR Research Directions
COSC 426 Lect. 8: AR Research DirectionsCOSC 426 Lect. 8: AR Research Directions
COSC 426 Lect. 8: AR Research Directions
Mark Billinghurst
 
2016 AR Summer School - Lecture 5
2016 AR Summer School - Lecture 52016 AR Summer School - Lecture 5
2016 AR Summer School - Lecture 5
Mark Billinghurst
 
1 track kinect@Bicocca - intro
1   track kinect@Bicocca - intro1   track kinect@Bicocca - intro
1 track kinect@Bicocca - introMatteo Valoriani
 
Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality
Mark Billinghurst
 
Lightweight Concurrency
Lightweight ConcurrencyLightweight Concurrency
Lightweight Concurrency
Andreas Heil
 
Kinect sensor
Kinect sensorKinect sensor
Kinect sensor
bhoomit morkar
 
1.pdf
1.pdf1.pdf
1.pdf
Tony Creat
 
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
npinto
 
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTION
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTIONA NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTION
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTIONIJASCSE
 
Alvaro Cassinelli / Meta Perception Group leader
Alvaro Cassinelli / Meta Perception Group leaderAlvaro Cassinelli / Meta Perception Group leader
Alvaro Cassinelli / Meta Perception Group leader
School of Creative Media, City University, Hong KOng
 
Mit6870 template matching and histograms
Mit6870 template matching and histogramsMit6870 template matching and histograms
Mit6870 template matching and histogramszukun
 
TOPIC 10-FUTURE ICT TRENDS.pptx
TOPIC 10-FUTURE ICT TRENDS.pptxTOPIC 10-FUTURE ICT TRENDS.pptx
TOPIC 10-FUTURE ICT TRENDS.pptx
NMohd3
 
Comp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VRComp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VR
Mark Billinghurst
 
Future of Mobile Augmented Reality (Zenitum's View Point)
Future of Mobile Augmented Reality (Zenitum's View Point)Future of Mobile Augmented Reality (Zenitum's View Point)
Future of Mobile Augmented Reality (Zenitum's View Point)
DoubleMe, Inc.
 
Sixth sense
Sixth senseSixth sense
Sixth sense
Deevena Dayaal
 
Object modeling in robotic perception
Object modeling in robotic perceptionObject modeling in robotic perception
Object modeling in robotic perception
MoniqueO Opris
 
Suman
SumanSuman

Similar to 426 Lecture 9: Research Directions in AR (20)

Mobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research DirectionsMobile AR Lecture 10 - Research Directions
Mobile AR Lecture 10 - Research Directions
 
Natural Interaction for Augmented Reality Applications
Natural Interaction for Augmented Reality ApplicationsNatural Interaction for Augmented Reality Applications
Natural Interaction for Augmented Reality Applications
 
COSC 426 Lect. 8: AR Research Directions
COSC 426 Lect. 8: AR Research DirectionsCOSC 426 Lect. 8: AR Research Directions
COSC 426 Lect. 8: AR Research Directions
 
2016 AR Summer School - Lecture 5
2016 AR Summer School - Lecture 52016 AR Summer School - Lecture 5
2016 AR Summer School - Lecture 5
 
Kinect
KinectKinect
Kinect
 
1 track kinect@Bicocca - intro
1   track kinect@Bicocca - intro1   track kinect@Bicocca - intro
1 track kinect@Bicocca - intro
 
Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality Grand Challenges for Mixed Reality
Grand Challenges for Mixed Reality
 
Lightweight Concurrency
Lightweight ConcurrencyLightweight Concurrency
Lightweight Concurrency
 
Kinect sensor
Kinect sensorKinect sensor
Kinect sensor
 
1.pdf
1.pdf1.pdf
1.pdf
 
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
MIT 6.870 - Template Matching and Histograms (Nicolas Pinto, MIT)
 
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTION
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTIONA NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTION
A NOVAL ARTECHTURE FOR 3D MODEL IN VIRTUAL COMMUNITIES FROM FACE DETECTION
 
Alvaro Cassinelli / Meta Perception Group leader
Alvaro Cassinelli / Meta Perception Group leaderAlvaro Cassinelli / Meta Perception Group leader
Alvaro Cassinelli / Meta Perception Group leader
 
Mit6870 template matching and histograms
Mit6870 template matching and histogramsMit6870 template matching and histograms
Mit6870 template matching and histograms
 
TOPIC 10-FUTURE ICT TRENDS.pptx
TOPIC 10-FUTURE ICT TRENDS.pptxTOPIC 10-FUTURE ICT TRENDS.pptx
TOPIC 10-FUTURE ICT TRENDS.pptx
 
Comp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VRComp4010 Lecture8 Introduction to VR
Comp4010 Lecture8 Introduction to VR
 
Future of Mobile Augmented Reality (Zenitum's View Point)
Future of Mobile Augmented Reality (Zenitum's View Point)Future of Mobile Augmented Reality (Zenitum's View Point)
Future of Mobile Augmented Reality (Zenitum's View Point)
 
Sixth sense
Sixth senseSixth sense
Sixth sense
 
Object modeling in robotic perception
Object modeling in robotic perceptionObject modeling in robotic perception
Object modeling in robotic perception
 
Suman
SumanSuman
Suman
 

More from Mark Billinghurst

The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?
Mark Billinghurst
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
Mark Billinghurst
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Mark Billinghurst
 
Future Research Directions for Augmented Reality
Future Research Directions for Augmented RealityFuture Research Directions for Augmented Reality
Future Research Directions for Augmented Reality
Mark Billinghurst
 
Evaluation Methods for Social XR Experiences
Evaluation Methods for Social XR ExperiencesEvaluation Methods for Social XR Experiences
Evaluation Methods for Social XR Experiences
Mark Billinghurst
 
Empathic Computing: Delivering the Potential of the Metaverse
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
Mark Billinghurst
 
Empathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the MetaverseEmpathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the Metaverse
Mark Billinghurst
 
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote CollaborationTalk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Mark Billinghurst
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader Metaverse
Mark Billinghurst
 
2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR
Mark Billinghurst
 
2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems
Mark Billinghurst
 
ISS2022 Keynote
ISS2022 KeynoteISS2022 Keynote
ISS2022 Keynote
Mark Billinghurst
 
Novel Interfaces for AR Systems
Novel Interfaces for AR SystemsNovel Interfaces for AR Systems
Novel Interfaces for AR Systems
Mark Billinghurst
 
2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping
Mark Billinghurst
 
2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction
Mark Billinghurst
 
2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology
Mark Billinghurst
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception
Mark Billinghurst
 
2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR
Mark Billinghurst
 
Empathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive AnalyticsEmpathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive Analytics
Mark Billinghurst
 
Metaverse Learning
Metaverse LearningMetaverse Learning
Metaverse Learning
Mark Billinghurst
 

More from Mark Billinghurst (20)

The Metaverse: Are We There Yet?
The  Metaverse:    Are   We  There  Yet?The  Metaverse:    Are   We  There  Yet?
The Metaverse: Are We There Yet?
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Future Research Directions for Augmented Reality
Future Research Directions for Augmented RealityFuture Research Directions for Augmented Reality
Future Research Directions for Augmented Reality
 
Evaluation Methods for Social XR Experiences
Evaluation Methods for Social XR ExperiencesEvaluation Methods for Social XR Experiences
Evaluation Methods for Social XR Experiences
 
Empathic Computing: Delivering the Potential of the Metaverse
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
 
Empathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the MetaverseEmpathic Computing: Capturing the Potential of the Metaverse
Empathic Computing: Capturing the Potential of the Metaverse
 
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote CollaborationTalk to Me: Using Virtual Avatars to Improve Remote Collaboration
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader Metaverse
 
2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR2022 COMP 4010 Lecture 7: Introduction to VR
2022 COMP 4010 Lecture 7: Introduction to VR
 
2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems2022 COMP4010 Lecture 6: Designing AR Systems
2022 COMP4010 Lecture 6: Designing AR Systems
 
ISS2022 Keynote
ISS2022 KeynoteISS2022 Keynote
ISS2022 Keynote
 
Novel Interfaces for AR Systems
Novel Interfaces for AR SystemsNovel Interfaces for AR Systems
Novel Interfaces for AR Systems
 
2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping2022 COMP4010 Lecture5: AR Prototyping
2022 COMP4010 Lecture5: AR Prototyping
 
2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction2022 COMP4010 Lecture4: AR Interaction
2022 COMP4010 Lecture4: AR Interaction
 
2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology2022 COMP4010 Lecture3: AR Technology
2022 COMP4010 Lecture3: AR Technology
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception
 
2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR2022 COMP4010 Lecture1: Introduction to XR
2022 COMP4010 Lecture1: Introduction to XR
 
Empathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive AnalyticsEmpathic Computing and Collaborative Immersive Analytics
Empathic Computing and Collaborative Immersive Analytics
 
Metaverse Learning
Metaverse LearningMetaverse Learning
Metaverse Learning
 

Recently uploaded

State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
g2nightmarescribd
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 

Recently uploaded (20)

State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy

426 Lecture 9: Research Directions in AR

  • 1. COSC 426: Augmented Reality Mark Billinghurst mark.billinghurst@hitlabnz.org Sept 19th 2012 Lecture 9: AR Research Directions
  • 2. Looking to the Future
  • 3. The Future is with us It takes at least 20 years for new technologies to go from the lab to the lounge. “The technologies that will significantly affect our lives over the next 10 years have been around for a decade. The future is with us. The trick is learning how to spot it. The commercialization of research, in other words, is far more about prospecting than alchemy.” Bill Buxton, Oct 11th 2004
  • 4. Research Directions experiences Usability applications Interaction tools Authoring components Tracking, Display Sony CSL © 2004
  • 5. Research Directions   Components   Markerless tracking, hybrid tracking   Displays, input devices   Tools   Authoring tools, user generated content   Applications   Interaction techniques/metaphors   Experiences   User evaluation, novel AR/MR experiences
  • 7. Occlusion with See-through HMD   The Problem   Occluding real objects with virtual   Occluding virtual objects with real Real Scene Current See-through HMD
  • 8. ELMO (Kiyokawa 2001)   Occlusive see-through HMD   Masking LCD   Real time range finding
  • 10. ELMO Design (diagram: the real world passes through an LCD mask and optical combiner, where virtual images from the LCD are added; a depth sensor watches the scene)   Use LCD mask to block real world   Depth sensing for occluding virtual images
  • 12. Future Displays   Always on, unobtrusive
  • 14. Contact Lens Display   Babak Parviz   University of Washington   MEMS components   Transparent elements   Micro-sensors   Challenges   Miniaturization   Assembly   Eye-safe
  • 17. Interaction Techniques   Input techniques   3D vs. 2D input   Pen/buttons/gestures   Natural Interaction   Speech + gesture input   Intelligent Interfaces   Artificial agents   Context sensing
  • 18. Flexible Displays   Flexible Lens Surface   Bimanual interaction   Digital paper analogy Red Planet, 2000
  • 19. Sony CSL © 2004
  • 20. Sony CSL © 2004
  • 21. Tangible User Interfaces (TUIs)   GUMMI bendable display prototype   Reproduced by permission of Sony CSL
  • 22. Sony CSL © 2004
  • 23. Sony CSL © 2004
  • 24. Lucid Touch   Microsoft Research & Mitsubishi Electric Research Labs   Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C. LucidTouch: A See-Through Mobile Device In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007, pp. 269–278.
  • 26. Auditory Modalities   Auditory   auditory icons   earcons   speech synthesis/recognition   Nomadic Radio (Sawhney) -  combines spatialized audio -  auditory cues -  speech synthesis/recognition
  • 27. Gestural interfaces   1. Micro-gestures   (unistroke, smartPad)   2. Device-based gestures   (tilt based examples)   3. Embodied interaction   (EyeToy)
  • 28. Natural Gesture Interaction on Mobile   Use mobile camera for hand tracking   Fingertip detection
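As a rough illustration of the fingertip-detection step mentioned above, the sketch below segments skin-colored pixels and treats convex-hull points far from the palm centre as fingertip candidates. This is a generic OpenCV recipe, not the actual mobile implementation; the frame is stubbed and the thresholds are assumptions.

```python
# Sketch: camera-based fingertip detection (generic recipe, not the original system).
import numpy as np
import cv2

frame = np.zeros((480, 640, 3), np.uint8)        # stand-in for a camera frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # heuristic skin range

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)    # assume largest blob is the hand
    hull = cv2.convexHull(hand)
    cx, cy = hand[:, 0, :].mean(axis=0)          # rough palm centre
    # Hull points far from the palm centre are fingertip candidates.
    tips = [tuple(pt[0]) for pt in hull
            if np.hypot(pt[0][0] - cx, pt[0][1] - cy) > 80]
```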
  • 29. Evaluation   Gesture input more than twice as slow as touch   No difference in naturalness
  • 30. Haptic Modalities   Haptic interfaces   Simple uses in mobiles? (vibration instead of ringtone)   Sony’s TouchEngine -  physiological experiments show people can perceive two stimuli 5 ms apart, with amplitudes as small as 0.2 microns (actuator diagram: piezoelectric stack, 4 µm × n layers, 28 µm total, drive voltage V)
  • 31. Haptic Input   AR Haptic Workbench   CSIRO 2003 – Adcock et al.
  • 32. AR Haptic Interface   Phantom, ARToolKit, Magellan
  • 35. To Make the Vision Real..   Hardware/software requirements   Contact lens displays   Free space hand/body tracking   Environment recognition   Speech/gesture recognition   Etc..
  • 36. Natural Interaction   Automatically detecting real environment   Environmental awareness   Physically based interaction   Gesture Input   Free-hand interaction   Multimodal Input   Speech and gesture interaction   Implicit rather than Explicit interaction
  • 38. AR MicroMachines   AR experience with environment awareness and physically-based interaction   Based on MS Kinect RGB-D sensor   Augmented environment supports   occlusion, shadows   physically-based interaction between real and virtual objects
  • 40. Architecture   Our framework uses five libraries:   OpenNI   OpenCV   OPIRA   Bullet Physics   OpenSceneGraph
  • 41. System Flow   The system flow consists of three sections:   Image Processing and Marker Tracking   Physics Simulation   Rendering
  • 42. Physics Simulation   Create virtual mesh over real world   Update at 10 fps – can move real objects   Used by the physics engine for collision detection (virtual/real)   Used by OpenSceneGraph for occlusion and shadows
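A minimal sketch of the idea on this slide, using pybullet as a stand-in for the C++ Bullet integration: a (stubbed) Kinect depth frame becomes a heightfield collision shape that virtual objects can collide with. All names and parameters are illustrative.

```python
# Sketch: real-world depth as a Bullet collision surface (AR MicroMachines idea).
import numpy as np
import pybullet as p

p.connect(p.DIRECT)
p.setGravity(0, 0, -9.8)

# Stub for a Kinect depth frame, downsampled to a coarse grid (the slide
# mentions updating the mesh at ~10 fps so real objects can move).
rows, cols = 64, 64
depth = np.random.uniform(0.0, 0.05, (rows, cols))

terrain = p.createCollisionShape(
    p.GEOM_HEIGHTFIELD,
    meshScale=[0.02, 0.02, 1.0],
    heightfieldData=depth.flatten().tolist(),
    numHeightfieldRows=rows,
    numHeightfieldColumns=cols)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=terrain)  # static world

# A virtual object (e.g. a toy car approximated by a sphere) colliding with it.
ball = p.createCollisionShape(p.GEOM_SPHERE, radius=0.05)
p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=ball,
                  basePosition=[0, 0, 0.5])

for _ in range(240):   # simulate ~1 s at the default 240 Hz step
    p.stepSimulation()
```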
  • 45. Motivation   AR MicroMachines and PhobiAR   •  Treated the environment as static – no tracking   •  Tracked objects in 2D   More realistic interaction requires 3D gesture tracking
  • 46. Motivation   Occlusion Issues   AR MicroMachines only achieved realistic occlusion because the user’s viewpoint matched the Kinect’s   Proper occlusion requires a more complete model of scene objects
  • 47. HITLabNZ’s Gesture Library Architecture   Five layers: 1. Hardware Interface; 2. Segmentation; 3. Classification/Tracking; 4. Modeling (hand recognition/modeling, rigid-body modeling); 5. Gesture (static, dynamic, and context-based gestures)
  • 48. Gesture Library – 1. Hardware Interface   Supports PCL, OpenNI, OpenCV, and the Kinect SDK   Provides access to depth, RGB, and XYZRGB data   Usage: capturing color images, depth images, and concatenated point clouds from a single camera or multiple cameras   For example: Kinect for Xbox 360, Kinect for Windows, Asus Xtion Pro Live
  • 49. Gesture Library – 2. Segmentation   Segments images and point clouds based on color, depth, and space   Usage: segmenting images or point clouds using color models, depth, or spatial properties such as location, shape, and size   For example: skin color segmentation, depth threshold
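A small sketch of the segmentation layer as described: a skin-color mask from the RGB image combined with a depth threshold. The frames are stubbed and the threshold values are common heuristics, not the library's actual settings.

```python
# Sketch: color + depth segmentation of the hand region.
import numpy as np
import cv2

bgr = np.zeros((480, 640, 3), np.uint8)          # stand-in for an RGB frame
depth_mm = np.full((480, 640), 1500, np.uint16)  # stand-in for a depth frame

# Skin color in YCrCb space (a common heuristic range).
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Keep only pixels closer than 1 m, i.e. inside the interaction volume.
near = cv2.inRange(depth_mm, 0, 1000)

hand_mask = cv2.bitwise_and(skin, near)          # both masks are 0/255 uint8
```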
  • 50. Gesture Library – 3. Classification/Tracking   Identifies and tracks objects between frames based on XYZRGB   Usage: identifying the current position/orientation of the tracked object in space   For example: a training set of hand poses, where colors represent unique regions of the hand; raw classifier output (without cleaning) on real hand input (depth image)
  • 51. Gesture Library – 4. Modeling   Hand recognition/modeling: skeleton based (for low resolution approximation) or model based (for more accurate representation)   Object modeling: identification and tracking of rigid-body objects   Physical modeling (physical interaction): sphere proxy, model based, or mesh based   Usage: general spatial interaction in AR/VR environments
  • 52. Method   Represent models as collections of spheres moving with the models in the Bullet physics engine
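The sphere-proxy idea can be sketched as follows, again with pybullet standing in for Bullet: a few spheres are repositioned every frame to follow the tracked hand, so the physics engine sees the hand as a set of colliders. The tracker interface is hypothetical.

```python
# Sketch: kinematic sphere proxies that follow a tracked hand.
import pybullet as p

p.connect(p.DIRECT)
shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.01)
proxies = [p.createMultiBody(baseMass=0, baseCollisionShapeIndex=shape)
           for _ in range(5)]   # e.g. one proxy per fingertip

def update_proxies(fingertips):
    """fingertips: list of (x, y, z) positions from the hand tracker (stubbed)."""
    for body, pos in zip(proxies, fingertips):
        p.resetBasePositionAndOrientation(body, pos, [0, 0, 0, 1])

update_proxies([(0.0, 0.1 * i, 0.2) for i in range(5)])
p.stepSimulation()
```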
  • 53. Method   Render the AR scene with OpenSceneGraph, using the depth map for occlusion   Shadows yet to be implemented
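Depth-map occlusion reduces to a per-pixel test: draw a virtual fragment only where it is closer to the camera than the captured real-world depth. A numpy sketch of that test (all frames stubbed):

```python
# Sketch: per-pixel depth test for occluding virtual content with real objects.
import numpy as np

camera_rgb = np.zeros((480, 640, 3), np.uint8)   # real camera image
real_depth = np.full((480, 640), 1.2)            # metres, from the depth sensor
virt_rgb   = np.zeros((480, 640, 3), np.uint8)   # rendered virtual content
virt_depth = np.full((480, 640), np.inf)         # inf where nothing was rendered

visible = virt_depth < real_depth                # virtual pixel wins only if closer
composite = camera_rgb.copy()
composite[visible] = virt_rgb[visible]
```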
  • 55. Gesture Library – 5. Gesture   Static (hand pose recognition)   Dynamic (meaningful movement recognition)   Context-based gesture recognition (gestures with context, e.g. pointing)   Usage: issuing commands, anticipating user intention, and high-level interaction
  • 57. Multimodal Interaction   Combined speech input   Gesture and speech complementary   Speech -  modal commands, quantities   Gesture -  selection, motion, qualities   Previous work found multimodal interfaces intuitive for 2D/3D graphics interaction
  • 58. 1. Marker Based Multimodal Interface   Add speech recognition to VOMAR   Paddle + speech commands
  • 60. Commands Recognized   Create Command "Make a blue chair": to create a virtual object and place it on the paddle.   Duplicate Command "Copy this": to duplicate a virtual object and place it on the paddle.   Grab Command "Grab table": to select a virtual object and place it on the paddle.   Place Command "Place here": to place the attached object in the workspace.   Move Command "Move the couch": to attach a virtual object in the workspace to the paddle so that it follows the paddle movement.
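To illustrate how such utterances might map onto the five command types, here is a toy parser; the vocabulary and parsing rules are assumptions for demonstration, not the VOMAR grammar.

```python
# Sketch: mapping recognized utterances to (command, color, object) tuples.
COLORS  = {"blue", "red", "green"}
OBJECTS = {"chair", "table", "couch"}

def parse(utterance):
    words = utterance.lower().rstrip(".").split()
    if words[0] == "make":                      # "Make a blue chair"
        color = next((w for w in words if w in COLORS), None)
        obj   = next((w for w in words if w in OBJECTS), None)
        return ("create", color, obj)
    if words[0] == "copy":                      # "Copy this"
        return ("duplicate", None, None)
    if words[0] == "grab":                      # "Grab table"
        return ("grab", None, words[-1])
    if words[0] == "place":                     # "Place here"
        return ("place", None, None)
    if words[0] == "move":                      # "Move the couch"
        return ("move", None, words[-1])
    return ("unknown", None, None)

print(parse("Make a blue chair"))   # -> ('create', 'blue', 'chair')
```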
  • 62. Object Relationships   “Put chair behind the table”   Where is behind?   View-specific regions
  • 63. User Evaluation   Performance time   Speech + static paddle significantly faster   Gesture-only condition less accurate for position/orientation   Users preferred speech + paddle input
  • 65. 2. Free Hand Multimodal Input   Use free hand to interact with AR content   Recognize simple gestures   No marker tracking Point Move Pick/Drop
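A sketch of how the three gestures on this slide might be distinguished from simple tracker outputs; the inputs and thresholds are hypothetical, for illustration only.

```python
# Sketch: classifying tracker output into the point/move/pick-drop gestures.
def classify(num_extended_fingers, hand_moving, pinching):
    if pinching:
        return "pick/drop"   # closing the hand grabs or releases an object
    if num_extended_fingers == 1:
        return "point"       # a single extended finger selects
    if hand_moving:
        return "move"        # an open moving hand drags the selection
    return "idle"

print(classify(1, False, False))   # -> 'point'
```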
  • 69. User Evaluation   Change object shape, colour and position   Conditions   Speech only, gesture only, multimodal   Measure   performance time, error, subjective survey
  • 71. Results   Average performance time (MMI, speech fastest)   Gesture: 15.44s   Speech: 12.38s   Multimodal: 11.78s   No difference in user errors   User subjective survey   Q1: How natural was it to manipulate the object? -  MMI, speech significantly better   70% preferred MMI, 25% speech only, 5% gesture only
  • 73. Intelligent Interfaces   Most AR systems are “stupid”   Don’t recognize user behaviour   Don’t provide feedback   Don’t adapt to user   Especially important for training   Scaffolded learning   Moving beyond check-lists of actions
  • 74. Intelligent Interfaces   AR interface + intelligent tutoring system   ASPIRE constraint-based system (from UC)   Constraints -  relevance cond., satisfaction cond., feedback
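The constraint structure named on this slide (relevance condition, satisfaction condition, feedback) can be sketched as a small data type; this mirrors the general constraint-based-modeling idea, not ASPIRE's actual API.

```python
# Sketch: a constraint fires feedback when it is relevant but unsatisfied.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    relevance: Callable[[dict], bool]     # does this constraint apply here?
    satisfaction: Callable[[dict], bool]  # is the student's state correct?
    feedback: str                         # shown if relevant but unsatisfied

constraints = [
    Constraint(
        relevance=lambda s: s.get("step") == "attach_part",
        satisfaction=lambda s: s.get("part_oriented", False),
        feedback="Check the orientation of the part before attaching it."),
]

def critique(state: dict) -> list[str]:
    return [c.feedback for c in constraints
            if c.relevance(state) and not c.satisfaction(state)]

print(critique({"step": "attach_part", "part_oriented": False}))
```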
  • 76. Intelligent Feedback   Actively monitors user behaviour   Implicit vs. explicit interaction   Provides corrective feedback
  • 78. Evaluation Results   16 subjects, with and without ITS   Improved task completion   Improved learning
  • 79. Intelligent Agents   AR characters   Virtual embodiment of system   Multimodal input/output   Examples   AR Lego, Welbo, etc   Mr Virtuoso -  AR character more real, more fun -  On-screen 3D and AR similar in usefulness
  • 81. Context Sensing   TKK Project   Using context to manage information   Context from   Speech   Gaze   Real world   AR Display
  • 89. Novel Experiences   Crossing Boundaries   Ubiquitous VR/AR   Collaborative Experiences   Massive AR   AR + Social Networking   Usability
  • 90. Crossing Boundaries Jun Rekimoto, Sony CSL
  • 91. Invisible Interfaces Jun Rekimoto, Sony CSL
  • 92. Milgram’s Reality-Virtuality continuum   Mixed Reality spans Augmented Reality (AR) and Augmented Virtuality (AV), between the Real Environment and the Virtual Environment, along the Reality–Virtuality (RV) Continuum
  • 93. The MagicBook   Moves along the continuum from Reality through Augmented Reality (AR) to Augmented Virtuality (AV) and Virtuality
  • 94. Invisible Interfaces Jun Rekimoto, Sony CSL
  • 95. Example: Visualizing Sensor Networks   Rauhala et al. 2007 (Linköping)   Network of Humidity Sensors   ZigBee wireless communication   Use Mobile AR to Visualize Humidity
  • 98. Invisible Interfaces Jun Rekimoto, Sony CSL
  • 99. UbiVR – CAMAR   CAMAR Controller, CAMAR Viewer, CAMAR Companion   GIST, Korea
  • 100. ubiHome @ GIST (diagram)   Services: media services, light service, MR window   Sensors: ubiTrack (where/when), Tag-it, ubiKey, PDA, couch sensor, door sensor   Each element is annotated with the context it provides (who/what/when/how)   ©ubiHome
  • 101. CAMAR - GIST (CAMAR: Context-Aware Mobile Augmented Reality)
  • 102. UCAM: Architecture (diagram)   wear-UCAM, ubi-UCAM, and vr-UCAM variants built from Content, Sensor, and Service components (Integrator, Manager, Interpreter, ServiceProvider)   Context Interface and Network Interface over BAN/PAN (Bluetooth) and TCP/IP (Discovery, Control, Event), on top of the Operating System
  • 103. Hybrid User Interfaces   Goal: To incorporate AR into a normal meeting environment   Physical Components   Real props   Display Elements   2D and 3D (AR) displays   Interaction Metaphor   Use multiple tools – each relevant for the task
  • 104. Hybrid User Interfaces (diagram): four configurations – 1 Personal, 2 Tabletop, 3 Whiteboard, 4 Multigroup – combining private displays with group and public displays
  • 105. Design space (diagram, from Joe Newman): Weiser’s terminal-to-ubiquitous axis crossed with Milgram’s reality-to-virtual-reality axis, locating UbiComp, Ubi AR, Ubi VR, Mobile AR, Desktop AR, and VR
  • 106. The same space (diagram) extended with a third axis from Single User to Massive Multi User
  • 108. AR Client   HMD and HHD   Showing virtual images over real world   Images drawn by remote expert   Local interaction
  • 109. Shared Visual Context (Fussell, 1999)   Remote video collaboration   Shared manual, video viewing   Compared Video, Audio, Side-by-side collaboration   Communication analysis
  • 110. WACL (Kurata, 2004)   Wearable Camera/Laser Pointer   Independent pointer control   Remote panorama view
  • 111. WACL (Kurata, 2004)   Remote Expert View   Panorama viewing, annotation, image capture
  • 112. As If Being There (Poelman, 2012)   AR + Scene Capture   HMD viewing, remote expert   Gesture input   Scene capture (PTAM), stereo camera
  • 113. As If Being There (Poelman, 2012)   Gesture Interaction   Hand postures recognized   Menu superimposed on hands
  • 114. Real World Capture   Using Kinect for 3D Scene Capture   Camera tracking   AR overlay   Remote situational awareness
  • 115. Remote scene capture with AR annotations added
  • 116. Future Directions: Massive Multiuser   Handheld AR for the first time allows extremely high numbers of AR users   Requires   New types of applications/games   New infrastructure (server/client/peer-to-peer)   Content distribution…
  • 117. Massive MultiUser   2D Applications   MSN – 29 million   Skype – 10 million   Facebook – 100m+   3D/VR Applications   SecondLife > 50K   Stereo projection - <500   Augmented Reality   Shared Space (1999) - 4   Invisible Train (2004) - 8
  • 120. Augmented Reality 2.0 Infrastructure
  • 121. Leveraging Web 2.0   Content retrieval using HTTP   XML encoded meta information   KML placemarks + extensions   Queries   Based on location (from GPS, image recognition)   Based on situation (barcode markers)   Queries also deliver tracking feature databases   Everybody can set up an AR 2.0 server   Syndication:   Community servers for end-user content   Tagging   AR client subscribes to arbitrary number of feeds
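A sketch of the retrieval pattern described above: fetch a KML feed over HTTP and keep the placemarks near the user's GPS position. The server URL is hypothetical and the distance test is deliberately simplified (no proper geodesic math).

```python
# Sketch: AR 2.0 style content retrieval from a KML feed, filtered by location.
import urllib.request
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def nearby_placemarks(url, lat, lon, radius_deg=0.01):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    hits = []
    for pm in root.iter(f"{KML_NS}Placemark"):
        name = pm.findtext(f"{KML_NS}name", default="?")
        coords = pm.find(f".//{KML_NS}coordinates")
        if coords is None:
            continue
        p_lon, p_lat = map(float, coords.text.strip().split(",")[:2])
        if abs(p_lat - lat) < radius_deg and abs(p_lon - lon) < radius_deg:
            hits.append((name, p_lat, p_lon))
    return hits

# e.g. nearby_placemarks("http://example.org/ar-feed.kml", -43.52, 172.58)
```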
  • 122. Content   Content creation and delivery   Content creation pipeline   Delivering previously unknown content   Streaming of   Data (objects, multi-media)   Applications   Distribution   How do users learn about all that content?   How do they access it?
  • 123. ARML (AR Markup Language)
  • 124. Scaling Up   AR on a City Scale   Using mobile phone as ubiquitous sensor   MIT Senseable City Lab   http://senseable.mit.edu/
  • 126. WikiCity Rome (Senseable City Lab MIT)
  • 128. AR Research in the HIT Lab NZ   Gesture interaction   Gesture library   Multimodal interaction   Collaborative speech/gesture interfaces   Mobile AR interfaces   Outdoor AR, interaction methods, navigation tools   AR authoring tools   Visual programming for AR   Remote Collaboration   Mobile AR for remote interaction
  • 129. More Information •  Mark Billinghurst –  mark.billinghurst@hitlabnz.org •  Websites –  http://www.hitlabnz.org/ –  http://artoolkit.sourceforge.net/ –  http://www.osgart.org/ –  http://www.hitlabnz.org/wiki/buildAR/