Tool Support for Prototyping Interfaces

Vision-based approaches are a promising method for indoor navigation, but prototyping and evaluating them poses several challenges. These include the effort of realizing the localization component, difficulties in simulating real-world behavior and the interaction between vision-based localization and the user interface. In this paper, we report on initial findings from the development of a tool to support this process. We identify key requirements for such a tool and use an example vision-based system to evaluate a first prototype of the tool.


  1. Tool Support for Prototyping Interfaces for Vision-Based Indoor Navigation
     Andreas Möller¹, Christian Kray², Luis Roalter¹, Robert Huitl¹, Stefan Diewald¹, Matthias Kranz³
     ¹Technische Universität München, Germany
     ²University of Münster, Germany
     ³Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Luleå, Sweden
     Distributed Multimodal Information Processing Group, Prof. Dr. Matthias Kranz, Technische Universität München
     1st Workshop on Mobile Vision and HCI, held at MobileHCI 2012, San Francisco
  2. Background and Motivation
     • Location is still one of the most important pieces of contextual information
     • Indoor localization is an active research topic and useful in many scenarios:
       – Airports
       – Hospitals
       – Conference venues
       – Other large environments
     • Various indoor localization methods are possible:
       – WLAN/cell-based localization
       – Sensor-based localization
       – Beacon-based localization
       – Vision-based localization
     (Presented on 21.09.2012 by Luis Roalter)
  3. Why use vision-based localization?
  4. Why vision-based localization?
     • Uses existing features in the environment
     • Modern devices are equipped with a camera and a powerful processor
     • No additional augmentation of the environment is needed
  5. Which challenges arise?
  6. Challenges of Vision-Based Localization
     • No fixed device required
       – Users bring their own smartphones, without any additional accessories
       – The device may be held in any way (landscape, portrait, towards the ground, towards the wall)
     • Quality of visual features
       – Repeating furniture within one hallway
       – Walls painted in a uniform color
     • Orientation of the user's smartphone
       – Features are typically found at eye height (windows, adverts, posters)
       – In a typical pose, the user holds the device so that it points at the floor
  7. Challenges in HCI
     • User interfaces need to address these problems:
       – Deal with limited accuracy
       – Offer corrective actions if not enough features are available
     • Hard to evaluate (would require an already functional localization, but the interface goes hand in hand with the algorithms)
     • Little to no experience exists with developing user interfaces for vision-based systems
     • There is a need for adequate prototyping tools!
  8. Prototyping: what do we need?
  9. Prototyping Indoor Navigation Interfaces
     • Requirements:
       – Prototyping in early development stages
       – Easy change of location (e.g. choosing a new environment)
       – Evaluation of usability, efficiency and effectiveness
       – Controlled localization conditions (e.g. location and orientation accuracy)
       – Evaluation of individual features in real-world scenarios
 10. The Wizard-of-Oz Approach
     • Experimenter's phone (server) → subject's phone (client):
       – Send an image directly, OR send the ID of an image in the client's local image database
       – Trigger UI events
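The slides describe this channel only at a diagram level. As an illustration, a minimal message format for such a Wizard-of-Oz link could look like the sketch below; the function names and the JSON wire format are our own assumptions, not part of the original tool:

```python
import json
import socket

def encode_message(msg_type, payload):
    """Serialize a Wizard-of-Oz command as a newline-delimited JSON message.
    Message types mirror the slide: send an image, send an image ID from the
    local database, or trigger a UI event (hypothetical naming)."""
    assert msg_type in ("image", "image_id", "ui_event")
    return (json.dumps({"type": msg_type, "payload": payload}) + "\n").encode("utf-8")

def decode_message(raw):
    """Parse one newline-delimited JSON message received on the client."""
    msg = json.loads(raw.decode("utf-8"))
    return msg["type"], msg["payload"]

if __name__ == "__main__":
    # Loopback demo: the experimenter sends an image ID so only a small
    # reference, not the full image, crosses the wire.
    server, client = socket.socketpair()
    server.sendall(encode_message("image_id", {"id": 42}))
    msg_type, payload = decode_message(client.makefile("rb").readline())
    print(msg_type, payload["id"])  # image_id 42
    server.close(); client.close()
```

Sending only an image ID keeps latency low when both phones hold the same pre-recorded image database, which matches the "local image DB" on the slide.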
 11. User Interface Concept
     • Use the camera of the smartphone
     • Enable the use of:
       – Virtual Reality (VR) with panorama images taken in advance (comparable to Google Street View)
       – Augmented Reality (AR) with a live view of the environment
     • In both cases, the user needs to point the camera towards interesting regions
     • How can the user be motivated to hold up the smartphone?
 12. Augmented Reality
     • Display the live camera image
     • Draw information within the image
     • The smartphone is held at eye height
     • Live view of the environment
     • Easy matching to the real environment
 13. Virtual Reality
     • Preloaded panorama images
     • Images similar to the real environment
     • The smartphone can be held towards the ground
     • No orientation change needed
     • The user has to match the virtual view to the real environment
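The VR mode above implies choosing which preloaded panorama to display for a given (simulated) user position. A plausible minimal mechanism is a nearest-neighbor lookup over the recording positions; the coordinates, file names, and data layout below are purely illustrative assumptions:

```python
import math

# Hypothetical panorama database: each entry stores the image file and the
# 2D position (in meters) where it was recorded. Values are invented.
PANORAMAS = [
    {"file": "pano_entrance.jpg", "pos": (0.0, 0.0)},
    {"file": "pano_hallway.jpg",  "pos": (10.0, 0.0)},
    {"file": "pano_hall.jpg",     "pos": (10.0, 15.0)},
]

def nearest_panorama(x, y, panoramas=PANORAMAS):
    """Return the panorama whose recording position is closest to (x, y)."""
    return min(panoramas,
               key=lambda p: math.hypot(p["pos"][0] - x, p["pos"][1] - y))

print(nearest_panorama(9.0, 2.0)["file"])  # pano_hallway.jpg
```

A linear scan suffices for a single building's dataset; a spatial index would only matter at much larger scales.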
 14. Mockup
     • Testing the augmentation content
 15. What if the algorithm needs a feature-rich region?
     • Motivate the user to look at the phone and follow the instructions (in both the AR and VR conditions!)
 16. Theoretical Evaluation
     • Evaluation based on the initial requirements:
       – The prototype can be used at an early project stage (even before localization is implemented)
       – AR and VR interface concepts can be tested on a real smartphone
       – Inaccuracy can be introduced through the manual transmission of images
       – Different motivation screens can point the user towards a feature-rich region
     • All initial requirements are met.
 17. Practical Evaluation
     • 5 subjects evaluated the mockup system
     • Predefined route through the building:
       – About 100 m
       – Dataset recorded with a Ladybug camera system
       – Corridors and a hall, including doors
 18. Findings of the First Evaluation
     • AR was considered nice and convenient
     • The AR overlay was not always accurate and felt disturbing in case of orientation errors
     • VR had an offset in location
     • But users can deal better with inaccuracy when matching panoramas to the real environment
 19. Interface Design and Usability
     • Future research questions:
       – Which visualization deals better with varying conditions?
       – Automatic vs. manual adjustments by the user?
       – Preferences for proactive behavior?
       – Are VR and AR sufficient for navigation tasks?
 20. Ongoing Related Work
     • Test the application within a large environment (a longer route with multiple turns)
     • Use a dataset of a complete building recorded in advance (for VR)
     • Evaluate the user's experience with the application's user interface
     • Survey with 30 participants
 21. Current Implementation
     • Subject's screen: drawing of images in AR or VR mode, connected to the server device
     • Experimenter's screen: sends specific images to the client
 22. Path through the Building
     • Includes hallways, corridors and different turns
     • Evaluate the user's experience in case of errors
     • Current research topic; the related user study will be published soon
 23. Summary
     • All initial requirements are met
       – The prototype can be tested without an actual localization component
       – Different views and augmentations are possible
     • A larger study with a longer route is currently being conducted with the same prototype
     • A fully functional prototype framework is available (to the public on request)
     • The Wizard-of-Oz approach can be replaced step by step
 24. Map
 25. Thank you for your attention! Questions?
     roalter@tum.de
     www.vmi.ei.tum.de/team/luis-roalter.html
 26. Paper Reference
     • Please find the associated paper at:
       https://vmi.lmt.ei.tum.de/publications/2012/mobivis_v02_preprint.pdf
     • Please cite this work as follows:
       Andreas Möller, Christian Kray, Luis Roalter, Stefan Diewald, Robert Huitl, Matthias Kranz. 2012. Tool Support for Prototyping Interfaces for Vision-Based Indoor Navigation. In: Workshop on Mobile Vision and HCI (MobiVis) at MobileHCI 2012, San Francisco, USA, September 2012
 27. If you use BibTeX, please use the following entry to cite this work:

     @INPROCEEDINGS{Mobivis12moeller,
       author={Andreas M\"{o}ller and Christian Kray and Luis Roalter and Stefan Diewald and Robert Huitl and Matthias Kranz},
       title={{Tool Support for Prototyping Interfaces for Vision-Based Indoor Navigation}},
       booktitle={{Proceedings of the Workshop on Mobile Vision and HCI (MobiVis). Held in Conjunction with Mobile HCI}},
       year={2012},
       month=sep,
       location={San Francisco, USA},
       abstract={Vision-based approaches are a promising method for indoor navigation, but prototyping and evaluating them poses several challenges. These include the effort of realizing the localization component, difficulties in simulating real-world behavior and the interaction between vision-based localization and the user interface. In this paper, we report on initial findings from the development of a tool to support this process. We identify key requirements for such a tool and use an example vision-based system to evaluate a first prototype of the tool.},
     }
