Slides from our ISMAR 2014 tutorial http://stctutorial.icg.tugraz.at/
Abstract:
Head-mounted displays such as Google Glass and the META have the potential to spur consumer-oriented optical see-through Augmented Reality applications. Correct spatial registration of those displays relative to a user’s eye(s) is a fundamental problem for any HMD-based AR application.
At our ISMAR 2014 tutorial we provide an overview of established and novel approaches for the calibration of those displays (OST calibration), including a hands-on session in which participants calibrate such head-mounted displays.
Google Glass, The META and Co. - How to calibrate your Optical See-Through Head Mounted Displays
1. Introduction to Optical
See-Through HMD Calibration
Jens Grubert (TU Graz) Yuta Itoh (TU Munich)
jg@jensgrubert.de yuta.itoh@in.tum.de
9th Sep 2014
2. Theory
14:15 Introduction to OST Calibration
15:00 coffee break
15:15 Details of OST Calibration
16:15 coffee break
Practice
16:30 Hands on session: calibration of OST HMDs
17:30 Discussion: experiences, feedback
17:50 wrap-up, mailing list
18:00 end of tutorial
7. The Lack of Consistencies
Spatial, Social, Visual, Temporal
8. Temporal Inconsist. in OST-HMD
“Latencies down to 2.38 ms are required to alleviate user perception when dragging”
“How fast is fast enough?: A study of the effects of latency in direct-touch pointing tasks”, Jota et al., CHI ’13
https://www.youtube.com/watch?v=PCbSTj7LjJg
9. Temporal Inconsist. in OST-HMD
Digital Light Processing Projector
“Minimizing Latency for Augmented Reality Displays: Frames Considered Harmful”, Zheng et al., ISMAR ’14
11. Visual Consistency
Wide field of view, etc.
“Pinlight Displays: Wide Field of View Augmented Reality Eyeglasses using Defocused Point Light Sources”, Maimone et al., TOG ’14
33. Theory
14:15 Introduction to OST Calibration
15:00 coffee break
15:15 Details of OST Calibration
16:15 coffee break
Practice
16:30 Hands on session: calibration of OST HMDs
17:30 Discussion: experiences, feedback
17:50 wrap-up, mailing list
18:00 end of tutorial
48. How to calibrate stereo systems?
Idea 2: Calibrate both eyes simultaneously
Why? Save time
49. Calibrate both eyes simultaneously
Idea:
1. Display 2D objects with disparity in the left and right eye → appears as a single object at a certain distance
2. Align the virtual object with a physical 3D object → get point correspondences for both eyes
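Step 1 relies on standard stereo geometry: under a pinhole model with parallel optical axes, the fused object appears at a depth of focal length times eye baseline divided by on-screen disparity. A one-line illustrative sketch (the function name and parameter names are mine, not from the tutorial):

```python
def depth_from_disparity(focal_px, eye_baseline_m, disparity_px):
    """Distance (in metres) at which a fused virtual object appears,
    given the horizontal disparity between the left and right images
    (simple pinhole model, parallel optical axes)."""
    return focal_px * eye_baseline_m / disparity_px
```

For example, with an assumed focal length of 800 px and a 64 mm interpupillary baseline, a 16 px disparity places the fused object about 3.2 m away.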
54. Idea
SPAAM: align a single point multiple times
Multi-Point Active Alignment (MPAAM): align several points concurrently, but only once
Why? Save time
57. MPAAM Variants
• Align all points at once
• Minimum of six points
• Vary spatial distribution
[TMX07]
58. MPAAM Variants
• Align all points at once
• Minimum of six points
• Vary spatial distribution
• Missing: tradeoff between # points and # calibration steps
[GTM10]
59. Performance
• MPAAM can be conducted significantly faster than SPAAM (on average 84 seconds vs. 154 seconds for SPAAM) [GTM10]
• MPAAM has comparable accuracy in the calibrated range
60. MPAAM take-aways
MPAAM can be an alternative to SPAAM if
• the working volume can be covered by the calibration body
• repeated calibration is needed (e.g., after the HMD slips)
67. Evaluation Questions
• How accurate is the overlay given the current calibration? [MGT01] [GTM10]
• How much do the calibration results vary between calibrations? [ASO11]
• What is the impact of individual error sources (head pointing accuracy, body sway, confirmation methods ...) on the calibration results? [AXH11]
69. How accurate is the overlay given the current calibration?
Popular approaches: use a camera, or ask the user
70. User in the Loop Evaluation
Qualitative feedback: “overlay looks good”
Quantitative feedback
72. Quantitative Feedback
McGarrity et al. [MGT01]:
• Use a tracked evaluation board
• Ask the AR system to superimpose an object at P_EB = (x_EB, y_EB)
• Ask the user to indicate where she perceives the object on the board: P_U = (x_U, y_U)
• Offset: ΔP = P_EB − P_U
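The offset measurement above reduces to a vector difference, typically aggregated over repeated trials. A tiny illustrative Python sketch (function names are mine, assuming NumPy):

```python
import numpy as np

def overlay_offset(p_board, p_user):
    """Registration error between where the AR system draws the point
    on the tracked board (P_EB) and where the user indicates she
    perceives it (P_U). Returns (delta vector, its magnitude)."""
    delta = np.asarray(p_board, float) - np.asarray(p_user, float)
    return delta, float(np.linalg.norm(delta))

def rms_offset(deltas):
    """Root-mean-square offset magnitude over repeated trials."""
    d = np.asarray(deltas, float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

Aggregating with an RMS (rather than a plain mean of components) avoids positive and negative pointing errors cancelling each other out.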
74. Quantitative Feedback
• Drawback of the stylus approach: evaluation only within arm’s reach
Alternatives:
• Use a laser pointer + human operator instead (beware pointing accuracy) [GTM10]
• Use a projector / large display + indirect pointing (e.g., mouse)
75. Quantitative Feedback
Benefits:
• Only way to approximate how the user herself perceives the augmentation
Drawbacks:
• Only valid for the current view (distance, orientation)
• Additional pointing error introduced
76. Take-Aways
• Quantitative user feedback is the only way to approximate how large the registration error is for individual users
• Feedback methods introduce additional (pointing) errors
• Make sure to test at all relevant working distances
83. Motivation
User-guided see-through calibration is too tedious.
Can the calibration process be shortened?
https://www.flickr.com/photos/stuartncook/4613088809/in/photostream/
84. Observation
We have to estimate 11 parameters of the 3D → 2D projection.
→ At least 6 point correspondences are needed.
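The count on this slide matches the classic Direct Linear Transformation: a 3×4 projection has 12 entries, 11 of them free after fixing scale, and each 3D↔2D correspondence contributes 2 equations, hence at least 6 points. A minimal NumPy sketch (function names are illustrative, not from the tutorial code):

```python
import numpy as np

def estimate_projection(world_pts, screen_pts):
    """Estimate the 3x4 projection matrix (11 free parameters, scale
    fixed) from >= 6 world (X, Y, Z) <-> screen (u, v) correspondences
    via the Direct Linear Transformation."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector belonging to the
    # smallest singular value of the 2n x 12 design matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, world_pt):
    """Apply a 3x4 projection to a 3D point; returns 2D pixel coords."""
    x = P @ np.append(np.asarray(world_pt, float), 1.0)
    return x[:2] / x[2]
```

In SPAAM the correspondences come from the user aligning an on-screen point with a tracked physical point; the solve itself is exactly this least-squares step. Note the points must not all lie in one plane, or the system is degenerate.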
86. Idea
Separate out the parameters which are independent from the user?
The user would then need to collect fewer point correspondences, making the task faster and easier.
88. TCS and EDCS
TCS: Tracking Coordinate System
EDCS: Eye-Display Coordinate System
Rotation and translation between the Tracking Coordinate System and the Eye-Display Coordinate System: 6 parameters for the center of projection
t_x, t_y, t_z
r_x, r_y, r_z
89. 5 intrinsic parameters of the Eye-Display optical system:
focal length (x, y), shear, principal point (x, y)
(+ more if you want to model distortion)
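The 6 + 5 parameter split of the two preceding slides can be made concrete by composing the 3×4 projection as K [R|t], with K holding the 5 intrinsics and (rvec, t) the 6 extrinsics. A hedged NumPy sketch (names are illustrative; rotation via Rodrigues’ formula):

```python
import numpy as np

def compose_projection(fx, fy, shear, cx, cy, rvec, t):
    """Build the 3x4 projection from the 5 intrinsic parameters
    (focal length x/y, shear, principal point x/y) and the 6 extrinsic
    parameters (axis-angle rotation rvec, translation t)."""
    K = np.array([[fx, shear, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    rvec = np.asarray(rvec, float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        kx, ky, kz = rvec / theta
        S = np.array([[0.0, -kz, ky],
                      [kz, 0.0, -kx],
                      [-ky, kx, 0.0]])
        # Rodrigues' rotation formula.
        R = np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)
    return K @ np.hstack([R, np.reshape(np.asarray(t, float), (3, 1))])
```

This is why approaches like [OZT04] can pre-measure K offline: only the user-dependent extrinsic part (roughly, where the eye sits behind the display) has to be updated per user.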
90. Separate intrinsic + extrinsic parameters [OZT04]:
1. Determine ALL parameters (including distortion) via camera, without user intervention
2. Update the center of projection in a user phase
92. INDICA: Interaction-free DIsplay CAlibration
Utilizes 3D eye localization [IK14]
– Interaction-free, thus does not bother users
– More accurate than a realistic SPAAM setup
93. 3D Eye Position Estimation
1. Estimate a 2D iris ellipse (iris detector + ellipse fitting by RANSAC [SBD12])
2. Back-project it to a 3D circle [NNT11]
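As a rough intuition for step 2 (the actual method back-projects the full ellipse to a 3D circle [NNT11]), the eye’s distance can be sketched from the projected iris size under a pinhole model with a frontal iris. This is a strong simplification and the names and numbers below are illustrative only:

```python
def iris_distance_mm(focal_px, iris_radius_mm, projected_radius_px):
    """Approximate eye-to-camera distance from the projected iris
    radius, assuming a pinhole camera and a roughly frontal iris
    (a simplification of the ellipse-to-3D-circle back-projection)."""
    return focal_px * iris_radius_mm / projected_radius_px
```

With an assumed anatomical iris radius of about 6 mm, a 10 px projected radius at a 800 px focal length would put the eye roughly half a metre from the camera; the inter-person variation of the iris radius is one reason the full method fits the 3D circle instead.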
94. World to HMD (eye) Projection
Manual (SPAAM)
Interaction-free (INDICA Recycle)
Interaction-free (INDICA Full)
(diagram: 3D world points projected to 2D display points)
95. Summary of INDICA
Calibration of OST-HMDs using the 3D eye position:
• Simple: no user interaction
• Accurate: better than degraded manual calibrations
97. How many control points for SPAAM?
• A minimum of 6 can lead to unstable and inaccurate results
• The more the better? Not necessarily: 16–20 control points are sufficient if the points are equally distributed in all three dimensions
99. Calibration Volume
If possible, calibrate the working volume you want to operate in.
(diagram: Working Volume vs. Calibration Volume)
100. Quality of Tracking System
Ensure the best calibration possible for your external tracking system.
Ensure a low latency.
101. Summary of Part 2
Reducing user errors:
- Data collection
- Confirmation
- Evaluation
Manual to automatic: state of the art
Practical tips
102. References 1/2
[AXH11] Axholt, M. (2011). Pinhole Camera Calibration in the Presence of Human Noise.
[ASO11] Axholt, M., Skoglund, M. A., O'Connell, S. D., Cooper, M. D., Ellis, S. R., & Ynnerman, A. (2011, March). Parameter estimation variance of the single point active alignment method in optical see-through head mounted display calibration. In Virtual Reality Conference (VR), 2011 IEEE (pp. 27-34). IEEE.
[AZU97] Azuma, R. T. (1997). A survey of augmented reality. Presence, 6(4), 355-385.
[CAR94] Chen, L., Armstrong, C. W., & Raftopoulos, D. D. (1994). An investigation on the accuracy of three-dimensional space reconstruction using the direct linear transformation technique. Journal of Biomechanics, 27(4), 493-500.
[CNN11] Christian, N., Atsushi, N., & Haruo, T. (2011). Image-based eye pose and reflection analysis for advanced interaction techniques and scene understanding. CVIM, 2011(31), 1-16.
[GTM10] Grubert, J., Tuemler, J., Mecke, R., & Schenk, M. (2010). Comparative user study of two see-through calibration methods. In VR (pp. 269-270).
[GTN02] Genc, Y., Tuceryan, M., & Navab, N. (2002, September). Practical solutions for calibration of optical see-through devices. In Proceedings of the 1st International Symposium on Mixed and Augmented Reality (p. 169). IEEE Computer Society.
103. References 2/2
[MAE14] Moser, K. R., Axholt, M., & Edward Swan, J. (2014, March). Baseline SPAAM calibration accuracy and precision in the absence of human postural sway error. In Virtual Reality (VR), 2014 IEEE (pp. 99-100). IEEE.
[MGT01] McGarrity, E., Genc, Y., Tuceryan, M., Owen, C., & Navab, N. (2001). A new system for online quantitative evaluation of optical see-through augmentation. In ISAR 2001 (pp. 157-166). IEEE.
[MDW11] Maier, P., Dey, A., Waechter, C. A., Sandor, C., Tönnis, M., & Klinker, G. (2011). An empiric evaluation of confirmation methods for optical see-through head-mounted display calibration. In International Symposium on Mixed and Augmented Reality (ISMAR), 2011 IEEE.
[OZT04] Owen, C. B., Zhou, J., Tang, A., & Xiao, F. (2004, November). Display-relative calibration for optical see-through head-mounted displays. In Mixed and Augmented Reality, 2004. ISMAR 2004. Third IEEE and ACM International Symposium on (pp. 70-78). IEEE.
[SBD12] Świrski, L., Bulling, A., & Dodgson, N. (2012, March). Robust real-time pupil tracking in highly off-axis images. In Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 173-176). ACM.
[TU00] Tuceryan, M., & Navab, N. (2000). Single point active alignment method (SPAAM) for optical see-through HMD calibration for AR. In Augmented Reality, 2000 (ISAR 2000). Proceedings. IEEE and ACM International Symposium on (pp. 149-158). IEEE.
104. Online References
Up-to-date references for the field of optical see-through calibration can be found here:
http://www.mendeley.com/groups/4218141/calibration-of-optical-see-through-head-mounted-displays/overview/
Editor's Notes
Future Work:
Compare quantitative results of user indicated offsets vs. camera based measurements