We present a novel user interface concept for indoor navigation that uses directional arrows and panorama images at decision points. The interface supports the mental model of landmark-based navigation, works both online and offline, and is highly tolerant to localization inaccuracy.
Vision-based approaches for mobile indoor localization do not rely on dedicated infrastructure and are therefore scalable and cheap. The particular requirements for a navigation user interface in a vision-based system, however, have not been investigated so far.
Such mobile interfaces should adapt to localization accuracy, which depends strongly on distinctive reference images, and to other factors, such as the phone’s pose. If necessary, the system should motivate the user to point the smartphone at distinctive regions to improve localization quality.
We present a combined interface of Virtual Reality (VR) and Augmented Reality (AR) elements with indicators that help to communicate and ensure localization accuracy. In an evaluation with 81 participants, we found that AR was preferred in case of reliable localization, but with VR, navigation instructions were perceived as more accurate in case of localization and orientation errors. The additional indicators showed potential for making users choose distinctive reference images for reliable localization.
This document describes an experimental evaluation of different user interfaces for visual indoor navigation. The study compared augmented reality (AR) to virtual reality (VR) and found that VR was faster and seemed more accurate to users. It also tested a feature indicator and found it increased the number of identifiable features in images. Finally, it evaluated object highlighting and found a soft border version was less distracting than a framed version. The novel user interfaces improved localization accuracy and were more effective and popular than traditional AR interfaces.
Vision-based approaches are a promising method for indoor navigation, but prototyping and evaluating them poses several challenges. These include the effort of realizing the localization component, difficulties in simulating real-world behavior, and the interaction between vision-based localization and the user interface. In this paper, we report initial findings from the development of a tool to support this process. We identify key requirements for such a tool and use an example vision-based system to evaluate a first prototype of the tool.
Location data plus customer behavior equals deeper customer relationships. Urban Airship Insights leverages location data for premium location-based mobile marketing and customer engagement.
Shane Mann Ruff completed the Coursera Mentor Community and Training Course, an online non-credit course authorized by Coursera Community Team and offered through Coursera. The certificate verifies that Shane successfully completed the course, as confirmed by Claire Smith, Community Manager of the Coursera Mentor Program.
Smart Fission is a technology powerhouse offering a proprietary full-stack mobile platform for high-accuracy indoor navigation, dynamic route guidance, and contextual location services.
Physical mobile interaction techniques let digital devices identify and interact with real-world objects. In this paper, we evaluate and compare state-of-the-art interaction methods in an extensive survey with 149 participants and in a lab study with 16 participants regarding efficiency, utility, and usability. Besides radio communication and fiducial markers, we consider visual feature recognition, reflecting the latest technical expertise in object identification. We conceived MobiMed, a medication package identifier implementing four interaction paradigms: pointing, scanning, touching, and text search.
We identified both measured and perceived advantages and disadvantages of the individual methods and gained fruitful feedback from participants regarding possible use cases for MobiMed. Touching and scanning were evaluated as fastest in the lab study and ranked first in user satisfaction. The strength of visual search is that objects need not be augmented, opening up physical mobile interaction, as demonstrated in MobiMed, to further fields of application.
"There’s An App for That" -- but how do we actually develop them? While smartphones and tablets are becoming increasingly popular and their application scenarios keep growing, we still develop apps using only a standard integrated development environment. Since context-based services and apps require, in addition to network connectivity, plenty of sensor data, the tools for providing realistic sensor data during development are still immature.
Developing, testing, debugging, and evaluating these next-generation context-based apps requires sensor data for the mobile device -- acceleration, motion, light, sound, camera, and many more sensors are available. However, existing development tools seriously limit application developers by providing this data either not at all or only on a very limited scale. Especially for indoor environments, with applications such as indoor navigation, seamless interaction between public and private displays, and activity recognition and monitoring, realistic sensor data is needed and simulation support during the development phase is essential.
In this paper, we present our work towards a holistic approach to mobile application development in intelligent environments, leveraging the existing development tool chain and facilitating more effective and realistic means for mobile application development, using the Android mobile device platform as an example.
We present a novel approach to use a mobile device for authentication and authorization purposes, where the user is able to authenticate and authorize himself for access on a public terminal. The concept is based on an extension of a Single-Sign On solution for mobile and public terminals.
The document describes DriveAssist, a driver assistance system for Android that uses vehicle-to-everything (V2X) communications. It provides an overview of the motivation and state of art for V2X communications and smartphone integration in vehicles. The system architecture and key functionality of DriveAssist are presented, including receiving safety messages from other vehicles and infrastructure and generating warnings to drivers. Live demos of warning scenarios are shown and future work plans to evaluate the system and improve the user interface are discussed.
In this paper, we present our vision of MobiliNet. MobiliNet is a user-oriented approach for optimising mobility chains with the goal of providing innovative mobility across different types of mobility providers – from personal short-range battery-powered mobility aids over different types of public transport (e.g. buses, short-distance train networks) to personal mobility means (e.g. car sharing).
Our vision for MobiliNet is based on a social-network-like system that is not limited to human participants. By including vehicles, corporations, parking spaces, and other objects and spaces, the system could make traveling more comfortable, less stressful, and ultimately more efficient for travellers.
MobiliNet allows high-level trip planning, but also pays attention to other important details of the supported means of transportation. Especially for user groups with special needs, MobiliNet actively supports self-determined mobility, which in turn enables active participation of this user group in social life. Besides supporting travellers, the system could also create new business opportunities for transport associations and car sharing corporations.
Visualization, A Primer - Basics, Techniques and Guidelines (Cagatay Turkay)
Slides for my talk at a workshop at DigitalCatapult in June 2015 on visualization design basics, common techniques, and some guidelines. The talk discusses some of the underlying theory and basic methods, followed by a selection of common vis methods arranged according to data type, and then moves on to some guidelines and recommendations for designing visualisations. Basically a very high-level 45-minute crash course in visualisation.
Ambient Intelligence: Definitions and Application Areas (Fulvio Corno)
Topics:
- Definition(s)
- Application areas
- Requested features
- Architectures
Slides for the course "Ambient Intelligence: Technology and Design" given at Politecnico di Torino during the 2013/2014 academic year.
Course website: http://bit.ly/polito-ami
In this paper, we describe a novel concept for motivating users to explore applications with natural user interfaces in the automotive domain. Prior findings show that it can be very hard for users to detect opportunities of action in such systems.
Additionally, traditional “did you know?” hints seem to be ignored by many users today. As a countermeasure, we describe an approach that motivates users to explore natural user interfaces through game elements. By awarding the user badges and experience level-ups, we hope to create a stronger motivation that is maintained at least until the user is accustomed to the system.
Robust trajectory estimation for crowdsourcing based mobile applications (Nexgen Technology)
Ecruitment Solutions (ECS) is a leading Delhi-based software development and HR consulting firm, assessed at the level of the ISO 9001:2008 standard. ECS delivers project- and product-based solutions to many customers around the globe.
In addition, ECS has broadened its scope by completing academic projects, especially for final-year professional degree students in India. ECS has a technical team that has implemented many IEEE papers and delivered world-class solutions.
Final lecture from the COMP 4010 course on Virtual and Augmented Reality. This lecture was about Research Directions in Augmented Reality. Taught by Mark Billinghurst on November 1st 2016 at the University of South Australia
MINI TRAFFIC.pptx (rosava2497)
This document summarizes a mini-project presentation on traffic sign detection using deep learning. A team of three students split a dataset into training and test sets to train a sequential model for traffic sign recognition. Their goals were to develop an efficient system for automatically identifying and classifying traffic signs using convolutional neural networks, handling challenges such as occlusions and lighting conditions. They conducted a literature survey of past works using techniques such as convolutional networks and deep neural networks that achieved state-of-the-art results on traffic sign datasets.
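The exact pipeline from the slides is not reproduced in this summary; the train-test split step it mentions can be sketched in plain Python as follows (the split ratio, shuffling seed, and dataset of (image path, class) pairs are illustrative assumptions, not from the presentation):

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle a dataset and split it into training and test sets."""
    items = list(samples)
    random.Random(seed).shuffle(items)        # deterministic shuffle
    cut = int(len(items) * (1 - test_ratio))  # index separating the two sets
    return items[:cut], items[cut:]

# Hypothetical dataset of (image_path, sign_class) pairs; 43 is the usual
# number of classes in public traffic sign benchmarks.
dataset = [(f"sign_{i}.png", i % 43) for i in range(100)]
train, test = train_test_split(dataset)
print(len(train), len(test))  # 80 20
```

Holding out a disjoint test set like this is what lets the reported accuracy estimate generalization rather than memorization.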
This article proposes an omnisupervised learning framework to improve semantic segmentation of omnidirectional images using multiple data sources. The framework trains an efficient CNN using labeled pinhole images and unlabeled panoramic images. It generates panoramic labels using an ensemble method that considers the wide-angle and wrap-around features of panoramas. This allows the CNN to be trained on automatically generated panoramic data, bypassing costly manual annotation. Experiments show the approach outperforms state-of-the-art methods on challenging omnidirectional datasets, demonstrating improved generalizability of the CNN to unseen panoramic domains.
The document discusses principles for pervasive information architecture and cross-channel user experiences. It defines key terms like cross-channel UX and pervasive information architecture. Cross-channel experiences consider the interconnected ecosystem of devices, users, and information across physical and digital domains. The document outlines five heuristics for cross-channel UX design: place-making, consistency, resilience, reduction, and correlation. It emphasizes the need for designers to adopt a holistic vision that considers the user experience across related physical and digital environments as an interconnected ecosystem.
Air Canvas, often referred to as the Virtual Pen, is a cutting-edge project leveraging the power of OpenCV and computer vision technology. Its primary aim is to transform any ordinary surface into an engaging and interactive sketching area, redefining the way we create digital art and design.
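Air Canvas itself builds on OpenCV's capture-and-track loop, which is not reproduced here; the core drawing step, rasterizing a tracked fingertip trajectory onto a canvas, can be sketched with NumPy alone (the trajectory coordinates and canvas size are made-up examples):

```python
import numpy as np

def draw_stroke(canvas, points, value=255):
    """Rasterize a polyline of tracked fingertip positions onto a canvas.

    `points` is a sequence of (row, col) pairs; consecutive pairs are
    connected with linearly interpolated pixels, roughly what cv2.line
    would do in an actual Air Canvas implementation.
    """
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        n = max(abs(r1 - r0), abs(c1 - c0), 1)  # one sample per pixel step
        rows = np.linspace(r0, r1, n + 1).round().astype(int)
        cols = np.linspace(c0, c1, n + 1).round().astype(int)
        canvas[rows, cols] = value
    return canvas

# Hypothetical fingertip trajectory on a 64x64 canvas: an L-shaped stroke.
canvas = np.zeros((64, 64), dtype=np.uint8)
draw_stroke(canvas, [(10, 10), (10, 50), (50, 50)])
print(int((canvas == 255).sum()))  # 81 lit pixels (two 41-pixel segments sharing a corner)
```

In a full system the `points` list would come from per-frame detection of a colored marker or fingertip in the webcam feed, appended while the "pen down" gesture is active.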
Towards the Integration of Spatiotemporal User-Generated Content and Sensor Data (Cornelius Rabsch)
The document discusses research on integrating spatiotemporal sensor data and user-generated content. It provides an overview of pervasive sensor networks that generate continuous data streams about the environment and how people-centric sensing through mobile devices is creating a new layer of contextual location-based data. The integration of these different data sources could provide increased situational awareness for applications like emergency response management and urban planning. It also presents some challenges around data heterogeneity that semantic technologies may be able to address.
Presentation at the eCare summer school, Ghent 2014 (An Jacobs)
In this presentation we start by explaining the necessity of a user-centered approach for any e-care solution. In the past, users were only consulted when a product was almost finished, at the end of a development trajectory, when making changes costs a lot of money. Today another approach is becoming the best practice of R&D: user-centered design. In the care domain this brings some extra challenges towards the inclusion of vulnerable people as well as overburdened care professionals; adapted UCD strategies are thus appropriate. Illustrated with examples from our own research experience in eCare R&D projects, we reflect on the essential steps, pitfalls, and solutions to integrate a user-centered approach into your future eCare project.
The document provides an introduction to computer vision. It discusses the schedule, instructor, report format, grading, and references for the computer vision course. The course will be held weekly on Saturdays starting in July 2023. It will be taught by Dr. Ichsan Ibrahim and include lectures, assignments, a midterm exam, and final exam. Assignment reports must follow a specific format and be submitted digitally. Computer vision applications discussed include optical character recognition, face detection, smile detection, biometric authentication, object recognition in mobile phones and supermarkets, lane monitoring systems, and reconstructing 3D models from images.
International Journal of Image Processing (IJIP) Volume (3) Issue (2) (CSCJournals)
The document describes a methodology for detecting the location of a virtual passive pointer in 3D space using two cameras and image triangulation. The pointer is modeled and simulated using 3D graphics software to generate images from two camera views. These images are analyzed using a computer program based on triangulation theory to determine the precise coordinates of the pointer in 3D as well as its projection on a display screen. The computational results match well with the known locations specified in the 3D simulation, demonstrating the accuracy of the technique for applications like interactive displays.
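The paper's triangulation program is not given in this summary; the standard linear method for recovering a 3D point from two camera views (direct linear transform) can be sketched as follows. The projection matrices and the test point are synthetic assumptions, not the paper's setup:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two calibrated cameras.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points.
    Each view contributes two linear constraints; the homogeneous system
    A X = 0 is solved via SVD (the null vector is the last row of Vt).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with projection matrix P and dehomogenize."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, point), project(P2, point)
recovered = triangulate(P1, P2, x1, x2)  # recovers [0.5, 0.2, 4.0]
```

With noise-free synthetic projections, as in the paper's simulation, the recovered coordinates match the known 3D location exactly, which is what makes a graphics-generated test scene a convenient accuracy check.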
User authentication on publicly exposed terminals with established mechanisms, such as typing the credentials on a virtual keyboard, can be insecure, e.g. due to shoulder surfing or a hacked terminal. In addition, username and password entry can be time-consuming and thus leaves room for improvement in terms of usability. As security and comfort often compete with each other, novel authentication and authorization methods, especially for public terminals, are desirable. In this paper, we present an approach to a distributed authentication and authorization system in which the user can be easily identified and enabled to use a service with his smartphone. The smartphone (a personal and private device the user is always in control of) can provide a highly secure authentication token that is renewed and exchanged in the background without the user’s participation. The claimed improvements were supported by a user survey with an implementation of a digital room management system as an example of a public display. The proposed authentication procedure would increase security and yet enable fast authentication at publicly exposed terminals.
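The paper's token format is not specified in this abstract; one common way to realize such a background-renewed token is an HMAC-signed, time-limited credential, sketched here with Python's standard library. The field layout, key handling, and 300-second lifetime are illustrative assumptions, not the paper's protocol:

```python
import hashlib
import hmac
import secrets
import time

# Shared secret known only to the backend; the phone holds the tokens it issues.
SERVER_KEY = secrets.token_bytes(32)

def issue_token(user_id, now=None, lifetime=300):
    """Issue a short-lived token the phone can renew in the background."""
    expires = int(now if now is not None else time.time()) + lifetime
    payload = f"{user_id}:{expires}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"

def verify_token(token, now=None):
    """Check signature and expiry; return the user id, or None if invalid."""
    user_id, expires, sig = token.rsplit(":", 2)
    payload = f"{user_id}:{expires}".encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None        # forged or tampered token
    if (now if now is not None else time.time()) >= int(expires):
        return None        # expired: the phone should have renewed it by now
    return user_id

token = issue_token("alice", now=1000)
print(verify_token(token, now=1100))  # alice
print(verify_token(token, now=2000))  # None (expired, renewal needed)
```

Because each token expires quickly and renewal happens silently on the phone, a terminal that captures one token gains only a short window of access, which matches the usability-without-sacrificing-security goal described above.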
Which teaching method is particularly well suited when, and for what? To answer this question, university lecturers can turn to the Android app "MobiDics" and systematically expand their knowledge of teaching methods. The workshop introduces MobiDics as an element of didactic qualification in higher education, and discusses the app's development as well as its integration into higher-education didactics programs.
Similar to Decision-Point Panorama-Based Indoor Navigation
Self-reporting techniques, such as data logging or a diary, are frequently used in long-term studies, but prone to subjects’ forgetfulness and other sources of inaccuracy. We conducted a six-week self-reporting study on smartphone usage in or- der to investigate the accuracy of self-reported information, and used logged data as ground truth to compare the sub- jects’ reports against. Subjects never recorded more than 70% and, depending on the requested reporting interval, down to less than 40% of actual app usages. They significantly over- estimated how long they used apps. While subjects forgot self-reports when no automatic reminders were sent, a high reporting frequency was perceived as uncomfortable and bur- densome. Most significantly, self-reporting even changed the actual app usage of users and hence can lead to deceptive measures if a study relies on no other data sources.
With this contribution, we provide empirical quantitative long-term data on the reliability of self-reported data col- lected with mobile devices. We aim to make researchers aware of the caveats of self-reporting and give recommenda- tions for maximizing the reliability of results when conduct- ing large-scale, long-term app usage studies.
We present GymSkill, a personal trainer for ubiquitous monitoring and assessment of physical activity using standard fitness equipment. The system records and analyzes exercises using the sensors of a personal smartphone attached to the gym equipment. Novel fine-grained activity recognition techniques based on pyramidal Principal Component Breakdown Analysis (PCBA) provide a quantitative analysis of the quality of human movements. In addition to overall quality judgments, GymSkill identifies interesting portions of the recorded sensor data and provides
suggestions for improving the individual performance, thereby extending existing work. The system was evaluated in a case study where 6 participants performed a variety of exercises on balance boards. GymSkill successfully assessed the quality of the exercises, in agreement with the
professional judgment provided by a physician. User feedback suggests that GymSkill has the potential to serve as an effective tool for motivating
and supporting lay people to overcome sedentary, unhealthy lifestyles. GymSkill is available in the Android Market as "VMI Fit"
The demographic change in the world’s population raises new problems for healthcare, such as rising costs for caretaking on elderly people. Ambient Assisted Living (AAL) aims at assisting elderly people through technical equipment to manage their daily tasks in their own homes. One of the important approaches is to monitoring vital parameters without actually sending nursing staff to the person in need of care. Additionally, by including motivational factors (e.g. sports and fitness programs), the person’s health state can be influenced. In this paper, we present a survey within a group of elderly people aged between 40 and 70 years, which is representative for the end-user group the GewoS Chair is designed for. Furthermore, we will discuss the elderly people’s behavior when dealing with new technologies and systems improving further attempts on this target group.
Luis Roalter presents challenges and possibilities for distributed networks within the Robot Operating System (ROS). Some challenges include integrating robots and sensors into large intelligent environments, transitioning to IPv6, and addressing security concerns in large distributed networks. Possible solutions involve adopting IPv6 to allow identification of devices, enabling multiple master nodes to improve reliability, and implementing routing between networks with synchronization nodes. The goal is to allow ROS to scale to large, complex distributed systems like the "Internet of Things."
Digital market places (e.g. Apple App Store, Google Play) have become the dominant platforms for the distribution of software for mobile phones. Thereby, developers can reach millions of users. However, neither of these market places today has mechanisms in place to enforce security critical updates of distributed apps. This paper investigates this problem by gaining insights on the correlation between published updates and actual installations of those. Our findings show that almost half of all users would use a vulnerable app version even 7 days after the fix has been published. We discuss our results and give initial recommendations to app developers.
We report on MobiDics, a mobile learning platform for professors, lecturers and tutors. In a survey with 100+ participants, we revealed that young, inexperienced teaching personnel at universities rarely use specific didactic methods to plan and structure courses. Such methods play an important role in learning processes since they, for example, activate students and contribute to more profound and sustainable learning experiences. Based on learning phases and social forms, MobiDics is able to suggest didactic methods that are adequate to a specific teaching situation. Parameters such as class size, teaching tool support, room constraints, etc. can additionally be incorporated. Learning settings can thereby be formalized and reconstructed based on the building blocks in form of didactic methods.
MobiDics encourages and supports the targeted use of didactic concepts with the long-term goal of increasing the quality of university education. A particular focus lies on cooperative learning through community-based features. Users report on their experiences how well certain methods worked by a commenting function, and exchange tips and feedback with peers and experts. While user-generated content can comfortably be added through the web frontend, a mobile application allows dynamic adaption of didactic planning to contextual conditions such as the current lecture hall.
In a two-step evaluation, MobiDics was adopted positively in the target group and its features highly appreciated. Our results motivate a further long-term study where we will evaluate MobiDics in the field.
More from Distributed Multimodal Information Processing Group (8)
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Institute for Media Technology
Distributed Multimodal Information Processing Group, Technische Universität München
Overview
Motivation: Vision-Based Localization
UI Concept and First Evaluation
Decision-Point Based Navigation
User Study
Discussion and Future Work
15.2.2013 A. Möller, M. Kranz, L. Roalter, S. Diewald, K. Synnes
Background and Motivation
§ Location information is still the most important contextual information
§ Indoor localization is a hot topic and useful in many locations:
□ Airports
□ Hospitals
□ Conference venues
□ Large environments
§ Various indoor localization methods are possible:
□ WLAN/cell-based localization
□ Sensor-based localization
□ Beacon-based localization
□ Vision-based localization
Vision-Based Localization
§ Principle
□ Compare query images, taken with the camera, to reference images based on visual features
§ Advantages
□ No infrastructure or augmentation of the environment needed
□ Immune to disturbances (reflection/refraction, …)
□ Works with existing (modern) hardware
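The matching principle can be sketched in a few lines. This is a toy illustration, not the actual localization pipeline: real systems use high-dimensional binary descriptors (e.g. 256-bit ORB/BRIEF) and larger distance thresholds; the 8-bit descriptors, location names, and threshold below are invented for the example.

```python
# Toy sketch of vision-based localization by binary descriptor matching.
# Real descriptors (e.g. ORB/BRIEF) are much longer; values here are illustrative.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_count(query_desc, ref_desc, max_dist=1):
    """Count query descriptors that have a close match in a reference image."""
    return sum(
        1 for q in query_desc
        if any(hamming(q, r) <= max_dist for r in ref_desc)
    )

def localize(query_desc, reference_db):
    """Return the reference location whose image matches the query image best."""
    return max(reference_db, key=lambda loc: match_count(query_desc, reference_db[loc]))

# Toy 8-bit descriptors for three reference panoramas (hypothetical locations)
reference_db = {
    "corridor_A": [0b10110010, 0b01101100, 0b11110000],
    "stairs_B":   [0b00001111, 0b10101010, 0b01010101],
    "lobby_C":    [0b11001100, 0b00110011, 0b11111111],
}

query = [0b10110011, 0b01101100]      # close to corridor_A's descriptors
print(localize(query, reference_db))  # → corridor_A
```

No infrastructure is needed because everything is derived from the images themselves; the quality of the estimate depends entirely on how distinctive the matched features are.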
HCI Perspective: Interaction Concept
§ Augmented Reality view for intuitiveness
□ But: when the location estimate is inaccurate, the consequence is wrong overlays!
§ Permanent re-localization along the path would be required
□ High computational effort
□ Uncomfortable camera pose
□ Not all scenes are adequate for visual localization (sometimes insufficient features for the algorithm)
Interaction Concept (cont.)
Instead:
§ Re-localization from time to time (when possible)
□ Estimation of intermediate positions with odometry (device sensors)
□ More comfortable usage
§ “Virtual Reality” view
□ Regularly show 360° panorama images of the environment with embedded navigation instructions (these panoramas are already available from the reference images)
□ More robust: the user matches the virtual and the real world
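Estimating intermediate positions between visual fixes amounts to dead reckoning from the device sensors. A minimal sketch, assuming step events with a compass heading per step and an assumed constant step length (both simplifications; a real system would calibrate step length and fuse accelerometer, gyroscope, and compass data):

```python
import math

# Minimal dead-reckoning sketch between two visual localization fixes.
# step_length and per-step headings are assumptions for illustration.

def dead_reckon(start, headings, step_length=0.7):
    """Advance from an initial (x, y) fix by one step per heading (radians)."""
    x, y = start
    for heading in headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return x, y

# Four steps heading east (0 rad) after a visual fix at the origin
pos = dead_reckon((0.0, 0.0), [0.0, 0.0, 0.0, 0.0])
print(round(pos[0], 2), round(pos[1], 2))
```

Drift accumulates with every step, which is exactly why the system re-localizes visually whenever a scene with sufficient features is available.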
Initial User Study
§ Comparison of Augmented Reality and Virtual Reality
§ Simulation of different levels of accuracy
§ Survey of user preferences
See full paper:
A. Möller, M. Kranz, R. Huitl, S. Diewald, L. Roalter: A Mobile Indoor Navigation System Interface Adapted to Vision-Based Localization. In: 11th Intl. Conf. on Mobile and Ubiquitous Multimedia (MUM 2012)
Initial User Study: Summary
§ Panoramas provide better guidance than Augmented Reality when the location estimate is inaccurate
§ But: subjects experienced frequent changes (~2 m) as irritating, as
□ Shown locations are not always correct
□ Locations sometimes do not look like they do in reality
□ Frequent changes in the UI are visually straining
§ Subjects managed to find their goal, even when not looking at the screen all the time
§ How to improve?
□ Reduce frequent visual changes
□ Simplify the match between panoramas and the real world (problem of misplaced panoramas?)
Decision-Point Based Navigation (DPBN)
§ Show only panoramas of decision points
§ Compliance with the familiar mental model of self-orientation
§ Users can always re-localize and retrieve the panorama of their current location
§ High error tolerance
§ Offline usage possible (by swiping through the list of route instructions)
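The interaction model above can be sketched as a small data structure: a route is an ordered list of decision points, each pairing a stored reference panorama with an instruction, navigable by manual swipes (offline) or by jumping to a visually matched point (re-localization). File names and instructions below are invented for illustration.

```python
# Hypothetical sketch of a decision-point route, assuming each decision
# point pairs a stored reference panorama with a navigation instruction.

from dataclasses import dataclass

@dataclass
class DecisionPoint:
    panorama: str      # file name of the 360° reference panorama
    instruction: str   # navigation instruction shown at this point

class Route:
    def __init__(self, points):
        self.points = points
        self.index = 0

    def current(self):
        return self.points[self.index]

    def swipe_next(self):                 # manual, offline navigation
        self.index = min(self.index + 1, len(self.points) - 1)
        return self.current()

    def relocalize(self, matched_index):  # jump to a visually matched point
        self.index = matched_index
        return self.current()

route = Route([
    DecisionPoint("pano_entrance.jpg", "Turn left at the entrance"),
    DecisionPoint("pano_stairs.jpg", "Take the stairs to the 2nd floor"),
    DecisionPoint("pano_corridor.jpg", "Goal is at the end of the corridor"),
])
print(route.swipe_next().instruction)  # → Take the stairs to the 2nd floor
```

Because the route is just this static list, it works without any connection or localization at all, which is the source of the high error tolerance.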
User Study: Research Questions
§ RQ1: Does DPBN have an effect on efficiency? Are users as fast as with continuous panoramas?
§ RQ2: Is DPBN more convenient than continuous panoramas? Which mode do users prefer? How well do they feel guided?
§ RQ3: What usage patterns can be identified? What do we learn for designing an “ideal” route description?
User Study – Quick Facts
§ 3 conditions
□ Continuous panoramas (automatic*)
□ Decision points (automatic*)
□ Decision points (manual navigation)
* automatic modes simulated via Wizard of Oz
§ 3 routes (to avoid learning effects)
§ Measured data: time, user behavior, questionnaires
§ 12 participants (75% male, 25% female, average age 25 years)
§ Within-subjects design
§ 75% own a smartphone; only one had experience with indoor navigation
Icon source: iconsdb.com
Results: RQ1
Figure: average time per mode (bar chart, in seconds): Continuous (automatic) 196 s, Decision-Point (automatic) 208 s, Decision-Point (manual) 263 s. The difference is not significant.
Results: RQ2
Figure: agreement ratings for “I found the method pleasing to use” (5: fully agree, 1: fully disagree; median and mean shown) for Continuous (automatic), Decision-Point (automatic), and Decision-Point (manual).
Results: RQ2
Figure: agreement ratings for “I felt guided well to the goal” (5: fully agree, 1: fully disagree; median and mean shown) for Continuous (automatic), Decision-Point (automatic), and Decision-Point (manual).
Results: RQ2
Figure: agreement ratings for “Panoramas of decision points are sufficient for orientation” (5: fully agree, 1: fully disagree), and a pie chart “Which system would you choose?” splitting 8% / 17% / 75% across the three modes Continuous (automatic), Decision-Point (automatic), and Decision-Point (manual).
Results: RQ3
Figure: distance (m) over time (s) for Participant 12 on Path 1, with swipe, relocalize, Δd, and position events marked.
Results: RQ3
Figure: distance (m) over time (s) for Participant 2 on Path 2, with swipe, relocalize, Δd, and position events marked.
Results: RQ3
Figure: distance (m) over time (s) for Participant 5 on Path 3, with swipe, relocalize, Δd, and position events marked.
Results: RQ3 – Manual Mode
§ Average numbers
□ 52.5 continuous locations (automatic, continuous mode)
□ 10.3 decision points (automatic, decision-point mode)
□ 15.3 manual swipes
□ 9.3 relocalizations
§ Different usage patterns/strategies
□ Showing “ahead” panoramas, no use of automatic re-localization
□ Frequent use of re-localization (sometimes only in parts of the route), equivalent to the continuous panorama mode
□ “Memorizing” the entire route was not observed; subjects only looked at the “next” panorama location
Discussion
§ Subjects were equally fast with DPBN as with continuous panoramas (RQ1)
§ Subjects felt better guided with continuous panoramas (RQ2)
□ Due to more certainty about whether they are on the right way
□ Confirmed by frequent swipes and re-localizations in manual mode
§ DPBN is desirable from a technical point of view (reliability, …), but acceptance must be increased
§ Requires stronger confirmation of whether subjects are (still) on the right way
How could this be achieved? (Lessons learned)
§ Compromise between the continuous mode and DPBN as it is now
§ Distance information
§ Verification when a decision point was passed
§ Add intermediate landmarks in decision-point mode
□ Especially if the next decision point lies too far away (distance threshold)
§ Simplify the mapping of picture content to the real environment
□ Highlight significant objects in the panorama
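The distance-threshold idea for intermediate landmarks can be sketched as follows. This is a hypothetical illustration of the rule, not the deck's implementation: the 20 m threshold, positions, and panorama names are invented, and a real system would pick intermediates by visual distinctiveness, not just position.

```python
# Hypothetical sketch of the distance-threshold rule: if two consecutive
# decision points are farther apart than max_gap metres, insert the
# available intermediate reference panoramas as extra landmarks.

def add_intermediate_landmarks(decision_points, intermediates, max_gap=20.0):
    """Both arguments are lists of (position_m, panorama) along the route."""
    result = []
    for (pos_a, pano_a), (pos_b, _) in zip(decision_points, decision_points[1:]):
        result.append((pos_a, pano_a))
        if pos_b - pos_a > max_gap:
            # keep every intermediate panorama lying inside the long gap
            result.extend(p for p in intermediates if pos_a < p[0] < pos_b)
    result.append(decision_points[-1])
    return result

expanded = add_intermediate_landmarks(
    decision_points=[(0, "entrance"), (45, "stairs"), (55, "goal")],
    intermediates=[(20, "poster_wall"), (50, "elevator")],
)
print([p[1] for p in expanded])  # → ['entrance', 'poster_wall', 'stairs', 'goal']
```

Only the 45 m gap exceeds the threshold, so a single intermediate landmark is inserted there; short hops between decision points stay as they are.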
Summary and Outlook
§ With Decision-Point Based Navigation, we have presented a novel interface for visual indoor navigation
□ Very robust against localization inaccuracy
□ Works even if localization fails (manual mode)
□ Supports users’ mental model of navigation (landmarks)
§ Real-world study results
□ DPBN is as efficient as full panorama navigation
□ DPBN needs more information (landmarks, confirmations) to reduce users’ uncertainty and increase satisfaction
§ Future work
□ See previous slide ;-)
□ Identification of good candidates for (additional) key points
Thank you for your attention!
Questions?
andreas.moeller@tum.de
www.vmi.ei.tum.de/team/andreas-moeller.html
This research project has been supported by the space agency of the German Aerospace Center with funds from the Federal Ministry of Economics and Technology on the basis of a resolution of the German Bundestag under reference 50NA1107.
Paper Reference
§ Please find the associated paper at:
https://vmi.lmt.ei.tum.de/publications/2013/MCPT2013-IndoorNav_preprint.pdf
§ Please cite this work as follows:
A. Möller, M. Kranz, L. Roalter, S. Diewald, Kåre Synnes: Decision-Point Panorama-Based Indoor Navigation. In: 14th International Conference on Computer Aided Systems Theory (EUROCAST 2013), pp. 325–326, Las Palmas de Gran Canaria, Spain, February 2013
If you use BibTeX, please use the following entry to cite this work:

@INPROCEEDINGS{MCPT13IndoorNav,
  author    = {Andreas M{\"o}ller and Matthias Kranz and Luis Roalter and Stefan Diewald and K{\aa}re Synnes},
  title     = {{Decision-Point Panorama-Based Indoor Navigation}},
  booktitle = {14th International Conference on Computer Aided Systems Theory (EUROCAST 2013)},
  editor    = {Alexis Quesada-Arencibia and Jos{\'e} Carlos Rodriguez and Roberto Moreno-Diaz jr. and Roberto Moreno-Diaz},
  year      = {2013},
  month     = feb,
  pages     = {325--326},
  ISBN      = {978-84-695-6971-9},
  location  = {Las Palmas de Gran Canaria, Spain},
}