Indoor navigation systems for users who are visually impaired typically rely on expensive physical augmentation of the environment or expensive sensing equipment; consequently, few systems have been implemented. We present an indoor navigation system called Navatar that allows for localization and navigation by exploiting the physical characteristics of indoor environments, taking advantage of the unique sensing abilities of users with visual impairments, and relying on the minimalistic sensing achievable with the low-cost accelerometers available in smartphones. Particle filters are used to estimate the user's location based on the accelerometer data as well as the user confirming the presence of anticipated tactile landmarks along the provided path. Navatar has strong potential for large-scale deployment, as it only requires an annotated virtual representation of an indoor environment. A user study with six blind users determines the accuracy of the approach, collects qualitative experiences, and identifies areas for improvement.
The User as a Sensor: Navigating Users with Visual Impairments in Indoor Spaces using Tactile Landmarks
1. The User as a Sensor: Navigating Users with Visual Impairments in Indoor Spaces using Tactile Landmarks
Navid Fallah, Ilias Apostolopoulos, Kostas Bekris, Eelke Folmer
Human Computer Interaction Lab, University of Nevada, Reno
[Title slide; speech bubble: "Found hallway!"]
2. Navigation
Humans navigate using:
» Path integration
» Landmark identification
Sighted people primarily rely on vision.
3. Users with Visual Impairments
Navigate using compensatory senses (touch, sound, smell)
Landmark identification is significantly slower
Reduced mobility & lower quality of life
4. Navigation Systems
[Figure: GPS works outdoors (✅) but not indoors]
5. Indoor Localization Techniques
Dead reckoning (compass + step counter): inaccurate, but cheap
Beacons: accurate, but expensive
Sensors: accurate, but poor usability
7. Veering
[Figure: veering outdoors vs. indoors, where walls and doors naturally constrain the path]
8. Dead Reckoning Localization
Step counter + compass update the user's position
Error accumulates over time
Sync with known landmarks (e.g., a door or hallway)
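To make the dead-reckoning update concrete, here is a minimal sketch in Python; the fixed step length and the function names are illustrative assumptions, not the system's actual code.

```python
import math

# Minimal dead-reckoning update (illustrative; the step length below is an
# assumed value, whereas the real system estimates it at runtime).
STEP_LENGTH_M = 0.7

def dead_reckon(x, y, heading_rad, steps):
    """Advance an (x, y) position by `steps` detected steps along a compass heading."""
    d = steps * STEP_LENGTH_M
    return x + d * math.cos(heading_rad), y + d * math.sin(heading_rad)

# Example: 10 steps heading east (0 rad), then 5 steps heading north (pi/2 rad).
x, y = dead_reckon(0.0, 0.0, 0.0, 10)
x, y = dead_reckon(x, y, math.pi / 2, 5)
print(f"estimated position: ({x:.1f} m, {y:.1f} m)")
```

Because every update builds on the previous estimate, any error in step detection or heading compounds over time, which is why the estimate is periodically synced against confirmed landmarks.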
14. Combining Techniques
[Figure: dead reckoning (compass + step counter) combined with user-confirmed landmarks such as doors]
+ cheap + accurate
15. Representation
3D model (Sketchup/KML) -> geometry parser -> navigable map
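As a rough illustration of what the parser's output might look like: a 2D graph of navigable space annotated with tactile landmarks. The deck only states that landmarks such as doors, slopes, and hallway intersections are extracted from the 3D model; all names below are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical navigable-map structures; not Navatar's actual data model.

@dataclass
class Landmark:
    kind: str   # e.g., "door", "hallway_intersection", "water_cooler"
    x: float
    y: float

@dataclass
class NavigableMap:
    nodes: dict = field(default_factory=dict)      # node id -> (x, y) in meters
    edges: dict = field(default_factory=dict)      # node id -> [(neighbor id, distance)]
    landmarks: list = field(default_factory=list)  # tactile landmarks placed on the map

    def add_edge(self, a, b, dist):
        # Hallway segments are walkable in both directions.
        self.edges.setdefault(a, []).append((b, dist))
        self.edges.setdefault(b, []).append((a, dist))

m = NavigableMap()
m.nodes["hall_A"] = (0.0, 0.0)
m.nodes["hall_B"] = (10.0, 0.0)
m.add_edge("hall_A", "hall_B", 10.0)
m.landmarks.append(Landmark("door", 5.0, 0.0))  # added automatically or by hand
```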
16. Direction Provision
Shortest path using A*
Generate directions:
1. Move to a landmark
2. Turn direction
3. Action on a landmark
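The deck confirms only that shortest paths are computed with A*; the graph, coordinates, and straight-line heuristic below are a generic textbook sketch, not Navatar's implementation.

```python
import heapq
import math

# Generic A* over a small landmark graph (illustrative nodes and positions).
coords = {"office": (0, 0), "door": (5, 0), "intersection": (5, 8), "lab": (12, 8)}
graph = {"office": ["door"], "door": ["office", "intersection"],
         "intersection": ["door", "lab"], "lab": ["intersection"]}

def dist(a, b):
    (ax, ay), (bx, by) = coords[a], coords[b]
    return math.hypot(ax - bx, ay - by)

def astar(start, goal):
    frontier = [(0.0, start, [start])]   # (f = g + h, node, path so far)
    best_g = {start: 0.0}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            g = best_g[node] + dist(node, nxt)
            if g < best_g.get(nxt, float("inf")):
                best_g[nxt] = g
                heapq.heappush(frontier, (g + dist(nxt, goal), nxt, path + [nxt]))
    return None

# Each edge of the resulting path can then be phrased as "move to <landmark>",
# a turn, or an action on a landmark such as "open the door".
print(" -> ".join(astar("office", "lab")))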
18. Interface
Synthetic speech: "Follow the wall to your right until you reach a hallway intersection"
The user senses the landmark with the cane ("Found hallway!") and confirms it with a tap on the screen
Next direction: "Turn right into the hallway"
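A minimal sketch of this confirm-and-advance loop, with hypothetical speak() and wait_for_tap() helpers standing in for the phone's text-to-speech output and touch input:

```python
# Hypothetical direction-provision loop; speak() and wait_for_tap() are
# stand-ins for the phone's TTS and touchscreen APIs, not real library calls.
def speak(text):
    print(f"[TTS] {text}")

def wait_for_tap():
    input("(tap = press Enter) ")

def guide(directions):
    """Speak each direction, then wait for the user's tap before advancing."""
    for step in directions:
        speak(step)
        wait_for_tap()   # user confirms the landmark was reached
    speak("You have arrived.")

guide(["Follow the wall to your right until you reach a hallway intersection",
       "Turn right into the hallway",
       "Open the second door on your left"])
```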
23. Particle Filters
Particles have a location and a weight
Locations are updated using the distribution of error in steps & compass
Weights are updated using map information and user input
Multiple filters are used to estimate step length
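A compact particle-filter sketch of the localization step described above; the noise parameters, the rectangular map test, and the landmark reweighting radius are assumptions for illustration, not the paper's values.

```python
import math
import random

# Illustrative particle filter: predict with noisy steps/compass, weight by
# map consistency and landmark confirmations, then resample.
N = 500
particles = [{"x": 0.0, "y": 0.0, "w": 1.0 / N} for _ in range(N)]

def inside_building(x, y):
    # Stand-in map check: a 50 m x 30 m rectangle of navigable space.
    return 0.0 <= x <= 50.0 and 0.0 <= y <= 30.0

def predict(step_count, heading_rad):
    """Move every particle, sampling step-length and compass noise."""
    for p in particles:
        d = step_count * random.gauss(0.7, 0.1)    # assumed step-length model
        h = heading_rad + random.gauss(0.0, 0.1)   # assumed compass noise (rad)
        p["x"] += d * math.cos(h)
        p["y"] += d * math.sin(h)
        if not inside_building(p["x"], p["y"]):    # map information
            p["w"] = 0.0

def confirm_landmark(lx, ly, radius=3.0):
    """User tapped: reweight particles by proximity to the confirmed landmark."""
    for p in particles:
        if math.hypot(p["x"] - lx, p["y"] - ly) > radius:
            p["w"] *= 0.01   # unlikely, but not impossible, given sensing range
    resample()

def resample():
    """Replace low-weight particles by sampling in proportion to weight."""
    global particles
    total = sum(p["w"] for p in particles) or 1.0
    weights = [p["w"] / total for p in particles]
    particles = [dict(random.choices(particles, weights)[0]) for _ in range(N)]
    for p in particles:
        p["w"] = 1.0 / N

predict(step_count=12, heading_rad=0.0)
confirm_landmark(8.0, 0.0)   # e.g., the user confirms the anticipated door
est_x = sum(p["x"] for p in particles) / N
est_y = sum(p["y"] for p in particles) / N
print(f"estimated location: ({est_x:.1f} m, {est_y:.1f} m)")
```

Running several such filters in parallel, each with a different step-length model, is one way to realize the "multiple filters" idea on the slide.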
32. Prior Studies
[Figure: landmark directions ("Intersection"; "Door, Door") vs. metric directions ("20 steps"; "15 steps")]
Feasibility study with 10 blindfolded users
Follow-up study with 8 blindfolded users:
» Computing directions at runtime
» Multiple filters for estimating step length
33. User Study
Engineering Building (hallways / labs / offices)
11 paths over two floors
Landmarks: hallway intersections, doors, water coolers, floor transitions
34. Participants
Six users with visual impairments
3 female; average age 51.8 (SD = 18.2)
3 totally blind, 2 legally blind, 1 low vision
All used a cane for navigation
No cognitive map of the Engineering Building
Users followed 11 paths, holding the phone in hand
35. Ground Truth
StarGazer (beacon-based): $2,000, 3 days to install
vs. Sketchup model: took 3 hours to create
36. Results
Quantitative:
» 85% of paths completed successfully
» Average error: 1.85 meters
» Door counting had the lowest success rate
Qualitative:
» Directions were easy to follow
» Allowed for efficient navigation
» Users liked the system
» Useful feedback on direction provision
37. Future Work
Improve step detection / avoid scuttling
Evaluation in more complex environments
Planning more reliable paths
38. Questions?
Editor's Notes
Hi, I am Eelke Folmer, and I'm here to present an indoor navigation system for users who are visually impaired, which I developed with two of my graduate students, Navid Fallah and Ilias Apostolopoulos, and my colleague Kostas Bekris.
In navigation, humans basically use two techniques: (1) path integration, where users update their current position using proprioceptive data, and (2) landmark-based identification, where users locate themselves by recognizing landmarks stored on a map. Used together, these techniques allow exploring new environments and building a cognitive map of the environment by observing landmarks. Sighted people primarily rely on vision to recognize landmarks.
But people with visual impairments must rely on their compensatory senses, such as touch, sound, and smell. Though no significant differences in path-integration abilities between sighted and blind users have been found, landmark identification and cognitive mapping are significantly slower, which leads to reduced mobility and a lower quality of life for users with visual impairments.
Several human navigation systems have been developed, which can be distinguished into outdoor and indoor systems. Whereas outdoor systems typically use GPS for localizing the user, indoor navigation systems must use a different technique, as GPS signals cannot be received indoors.
Existing indoor navigation systems typically use one of the following three techniques. (1) Dead-reckoning localization uses low-cost sensors, such as a compass and pedometer, to update the user's position based on observed motion; these techniques are cheap but inaccurate, as error in the location estimate propagates over time. (2) Beacon-based systems embed identifiers such as RFID tags in the environment; a sensor detects a particular identifier, upon which the user can be localized. These systems are accurate but often prohibitively expensive to install: though the tags themselves are cheap, installing them in a large environment such as an airport is not. (3) Sensing-based approaches typically equip the user with a number of sensors, such as a camera, and locate the user by detecting pre-existing features of indoor spaces that have been recorded on a prior map. Though this technique can be accurate, from a usability point of view it requires the user to carry sensors and computing equipment. For users with visual impairments this is undesirable, as they often already carry a cane and assistive devices such as a braille reader.
So, given that few of these systems have been implemented at a large scale, we set out to investigate whether we could design a navigation system that is cheap to implement and easy to use for users with visual impairments. We started our project by analyzing navigation in indoor environments more closely.
You need accurate localization to avoid veering, i.e., a user deviating from the provided path while navigating. But veering is less of a problem in indoor environments than outdoors, as navigation is naturally constrained by the physical environment, such as walls and doors. So you could argue that super-precise localization may not be required for indoor navigation.
Of the three indoor localization techniques, dead reckoning is not very precise, but it can be implemented using features present in most smartphones, e.g., accelerometers and a compass. So you don't have to hook the user up with a bunch of sensors or add RFID tags to the environment. A problem with dead reckoning is that errors in estimating the user's location propagate and accumulate over time, but this can be avoided by periodically synchronizing the user's location with known landmarks in the environment, such as hallway intersections, a door, or a water cooler.
Now let's look at how blind people navigate familiar environments using the cognitive map they have of them. The user navigates and senses the environment with a cane: the user finds a wall, then finds a hallway intersection. So the identification of landmarks, in this case tactile landmarks such as walls, hallway intersections, and doors that can be recognized with a cane, already plays a major role in how blind people navigate.
So we propose a navigation system that seamlessly integrates with how users with visual impairments already navigate, by combining elements of existing indoor navigation systems. We use dead reckoning because it can be implemented on a smartphone, but we increase its accuracy using a sensor/beacon-style approach in which we turn the user into the sensor by having them confirm the presence of anticipated tactile landmarks along their path.
For representing our environment we use a 3D model, as it can convey information such as ramps and low ceilings that are impediments to a blind user. Models are created in Sketchup, and a simple geometry parser turns them into a navigable 2D map, which is more suitable for path planning. The parser extracts navigable space and identifies landmarks such as doors, slopes, and hallway intersections. Other types of landmarks are manually added to the 2D map. The use of 3D models as opposed to 2D models is further motivated by such models becoming increasingly available on Google Earth.
Using the 2D map and a start and target location, we compute the shortest path using A*. We then parse this path into directions of three forms: move to a landmark (using strategies such as wall following and door counting), turn directions, or an action on a landmark, such as opening a door.
These directions are provided to the user using synthetic speech. Users provide input to our system by confirming the successful execution of the command by tapping the screen, upon which the next direction is provided.
A problem with this approach is that we are dealing with a lot of uncertainty: the sensors we use are noisy, and there is uncertainty in the user input. So we use a Bayesian filtering technique from the field of robotics called particle filters. Each particle contains an estimate of where the user is located and how probable this estimate is. Particle locations are updated based on a distribution of error in the detected steps and the compass information. Weights are updated based on map information and on the user's confirmation of landmarks. Particles with low weight are replaced with new ones to improve the localization. The particle filters also allow for self-correction; e.g., we can detect that a user has passed a landmark and generate new directions. We use multiple filters so as to incorporate different estimates of the user's step length. This technique is computationally expensive, but it can run on most recent smartphones.
Prior to the user study I present today, we did two studies. The first was conducted with blindfolded users and evaluated the effectiveness of metric directions versus landmark-based directions; we found that landmark-based directions containing fewer but more distinguishable landmarks gave much better performance. As the localization in that study was performed offline, a second study with blindfolded users explored runtime direction provision and localization on the phone, where we also explored the use of multiple filters for estimating the user's step length.
In this study we tested our system with the intended target demographic. Studies were performed in our engineering building, a multi-floor building with hallways, labs, and offices. We created 11 paths over two different floors. Landmarks included doors, hallway intersections, water coolers, and floor transitions. This image shows the map of the second floor.
We recruited six users with visual impairments through the local chapter of the National Federation of the Blind. Three were female, and the average age was 51 years. Three of these users were totally blind, two were legally blind, and one had low vision. All used a cane for navigation. Most importantly, none of the subjects had been in the engineering building before.
For ground-truth measurement we used a commercial beacon-based localization system called StarGazer. Reflective tags are installed in the ceiling, and the user wears a camera and computing equipment on a belt to localize the user. This system has an accuracy of about 2 cm. From a cost perspective, it is interesting to note that the beacon-based system took two students 3 days to install and cost $3,000, whereas our 3D model took one student about 3 hours to create and annotate.
Users were able to complete 85% of the paths successfully, which we defined as being able to reach the target destination with their cane. We found an average localization error of 1.85 meters, which seems large but is within the sensing range of a blind user. Qualitative results were acquired through interviews and a questionnaire. Overall, users found the directions easy to follow. Paths that involved counting doors had the lowest success rate, as it becomes more difficult to pick up steps.
One thing we would like to investigate in future work is testing our system in more complex environments; the engineering building has narrow hallways, so it is difficult to get lost. Other environments, for example a library or an airport, contain more open spaces where veering will become a larger problem. One promising way to address this is path planning: the system currently computes the shortest path using A*, which may not always be the most reliable path.