The document describes a proposed system called SRAVIP (Smart Robot Assistant for Visually Impaired Persons) that aims to assist visually impaired individuals in navigating indoor environments. SRAVIP includes two subsystems: 1) an initialization system to create an environment map and register users, and 2) a real-time operation system to navigate the mobile robot and communicate with users through speech or text. The robot utilizes sensors and simultaneous localization and mapping to safely guide a registered user to their desired location indoors. The system was tested successfully using a Turtlebot3 robot at a university campus.
This work aims to provide a practical guide to assist students of Computer Science and related fields in conducting a systematic literature review. The steps proposed in this paper for conducting a systematic review were extracted from a technical report published by the researcher Bárbara Kitchenham [1] and arranged in a more objective format, in order to make the information more accessible and practical, especially for those having their first contact with this technique.
RF Based Talking Signage for Blind Navigation IJCI JOURNAL
The major challenges for a visually impaired person lie in mobility, object identification, and identification of the space around him or her. The proposed RF Based Talking Signage for Blind Navigation aims to provide a universal electronic travel guide for visually challenged people. The system incorporates a user-friendly and versatile method called "Talking Signage" that is implemented using Android devices. An Android application on the mobile phone delivers voice messages about the user's environment via a heterogeneous network, and the system can be deployed in any dense environment so that blind persons can fulfill their needs. Its primary advantages over other systems in the area are low cost, ease of transport, low power consumption, and light weight, and it can be used by people who are not technically skilled. The architecture proposed in the paper clearly shows the communication between a mobile phone and a heterogeneous network enabled with RF devices. We have implemented the system in our university environment, and the proposed system was found to be a great success.
Visual, navigation and communication aid for visually impaired person IJECEIAES
The loss of vision restrains visually impaired people from performing their daily tasks. It impedes their free movement and makes them dependent on others, and for a long time technology did little to change their situation. With the advent of computer vision and artificial intelligence, the situation has improved to a great extent. The proposed design is a wearable device capable of performing many functions. It provides a visual sense by recognizing objects and identifying selected faces. The device runs a pre-trained model to classify common objects, from household items to automobiles. Optical character recognition and Google Translate are used to read text from images and to convert the user's speech to text, respectively. The user can also search for a topic of interest by spoken command. Additionally, ultrasonic sensors fixed at three positions sense obstacles during navigation. An attached display helps in communication with deaf persons, and GPS and GSM modules aid in tracing the user. All these features are driven by voice commands passed through the microphone of any earphone. The visual input is received through the camera, and the computation is performed on a Raspberry Pi board. The device proved effective during testing and validation.
Methodologies and evaluation of electronic travel aids for the visually impaired journalBEEI
Technological advancements have contributed widely to navigation aids. However, their large-scale adoption as navigation solutions for visually impaired people has not been realized yet. Low participation of visually impaired subjects produces designer-oriented navigation systems that overshadow consumer necessity. The outcome is trust and safety issues that hinder navigation aids from truly contributing to the safety of the targeted end user. This study categorizes electronic travel aids (ETAs) based on their experimental evaluations and highlights the designer-centred development of navigation aids with insufficient participation of the visually impaired community. The research first breaks down the methodologies used to achieve navigation, then categorizes the tests and experiments done to evaluate the systems and ranks them in order of maturity. Of 70 selected research articles, 51.4% rely on simulation evaluation, 24.3% involve blindfolded sighted humans, 22.9% involve visually impaired people, and only 1.4% make it into production and commercialization. Our systematic review offers a bird's-eye view of ETA development and evaluation and contributes to the construction of navigation aids that really impact the target group of visually impaired people.
Eye(I) Still Know! – An App for the Blind Built using Web and AI Dr. Amarjeet Singh
This paper proposes eye(I) still know!, a voice-controlled solution for visually impaired people. The main idea is that even though the blind cannot see, they can still know where to go and what to do! Nearly 60% of the world's blind population lives in India. In a time when no one likes to rely on anyone, this is a small effort to make the blind independent individuals, achieved using wireless communication, voice recognition, and image scanning. Using object identification, the application informs the user about barriers in the path ahead of time.
The software uses the device's camera to scan all obstacles along with their distances from the user, followed by audio instructions through the device's audio output.
This efficiently directs the user along his or her way.
The aim of this paper is to help blind people identify and catch public transport vehicles with the help of Light Fidelity (Li-Fi) technology; it is a navigation aid. When the bus arrives at the bus stand, a transmitter in the bus transmits light signals, a receiver in the stick picks them up, and a sound signal is generated through a speaker in the stick. The sound message contains the bus number and the destination of the bus. In addition, if the person goes missing or gets lost, details of the location can be sent to his or her family members by pressing a button, which is made possible with the help of the Global System for Mobile communications (GSM). Finally, the presence of water along the blind person's path can be detected with the help of water sensors.
Technology is growing rapidly, and everyone has some limitation; one such limitation is visual disability. We therefore present a system that helps visually disabled people. The framework combines object detection with voice assistance in an app and a hardware part, attached to the blind person's stick, for distance measurement. The app is designed to let the blind person explore freely wherever he or she wants. The framework works by observing the scene around the user and distinguishing objects using a camera. The app detects the objects in the input video frame using the SSD algorithm, comparing them against the trained model; the captured video is partitioned into a grid to detect object obstacles. In this way, the details of the detected object can be obtained, and the distance can also be calculated using specific algorithms. A text-to-speech (TTS) converter turns the information about the detected object into audio. At the click of a button, the application conveys the scene the blind person is moving through in his or her regional language. The technologies used here make the framework effective. Sabin Khader | Meerakrishna M R | Reshma Roy | Willson Joseph C, "Godeye: An Efficient System for Blinds", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 4, Issue 4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31631.pdf, https://www.ijtsrd.com/engineering/computer-engineering/31631/godeye-an-efficient-system-for-blinds/sabin-khader
VISUALLY IMPAIRED PEOPLE MONITORING IN A SMART HOME USING ELECTRONIC WHITE CANE ijcsit
Monitoring visually impaired people is important in helping them travel safely, and many research works have implemented travel aids, mostly based on the white cane. This work introduces an electronic white cane based on sensor technology. The proposed cane helps its user detect obstacles within two meters, whether on the ground or at height. Once an obstacle is detected, the system sends vocal instructions via a Bluetooth headset to alert the person concerned. Ultrasonic and infrared sensors are mounted on the white cane to provide it with the necessary intelligence, and a Raspberry Pi performs the data processing. The proposed system also includes a mobile application to track the visually impaired person in real time; this application can trace the user's route, which is important for determining the possible cause of harm to users during their travels. Python is used as the programming language for the electronic devices, the mobile application is built for Android, and the web application is a REST API developed using Python and NodeJS. The system has been implemented and tested, and the results show the efficacy of the proposed system.
Design and implementation of smart guided glass for visually impaired people IJECEIAES
The objective of this paper is to develop an innovative microprocessor-based smart glass for those who are visually impaired. Among the devices currently on the market, some help blind people by sounding a buzzer when an object is detected, but no single device provides object, hole, and barrier information together with distance, family-member recognition, and safety information. Our proposed guiding glass delivers all of that necessary information to the blind person's ears as audio instructions. The proposed system relies on a Raspberry Pi 3 Model B, a Pi camera, and a NEO-6M global positioning system (GPS) module. We use TensorFlow and the faster region-based convolutional neural network (Faster R-CNN) approach for object detection and for recognizing the blind person's family members. The system provides voice information through headphones and enables the blind individual to gain independence and freedom in indoor and outdoor environments.
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 12, No. 7, 2021
SRAVIP: Smart Robot Assistant for Visually Impaired Persons

Fahad Albogamy1
Turabah University College, Computer Sciences Program, Taif University, Saudi Arabia

Turk Alotaibi2, Ghalib Alhawdan3, Mohammed Faisal*4
College of Applied Computer Sciences, King Saud University, Saudi Arabia
Abstract—Vision is one of the most important human senses; visually impaired people encounter various difficulties because they cannot move safely in different environments. This research aims to facilitate the integration of such persons into society by proposing a robotic solution (robot assistance) to help them navigate within indoor environments, such as schools, universities, hospitals, and airports, according to a prescheduled task. The proposed system is called the smart robot assistant for visually impaired persons (SRAVIP). It includes two subsystems: 1) an initialization system to initialize the robot, create an environment map, and register a visually impaired person as a target object; 2) a real-time operation system to navigate the mobile robot and communicate with the target object using a speech-processing engine and an optical character recognition (OCR) module. An important contribution of the proposed SRAVIP is that it is user-independent: it is not tied to a specific user, and one robot can serve an unlimited number of users. We utilized a Turtlebot3 robot to realize SRAVIP and then tested it in the College of Computer and Information Sciences, King Saud University, AlMuzahmiyah Campus. The experimental results confirmed that the proposed system functions successfully.
Keywords—Mobile robot; robotics; robot assistance; visually impaired persons
I. INTRODUCTION
At present, a large number of persons live with various physical impairments. The World Health Organization (WHO) estimated in 2019 that approximately 2.2 billion people worldwide live with a form of vision impairment, among whom 36 million are blind [1]. Visual impairment is a serious issue that prevents people from moving freely. Such people usually require the assistance of unimpaired individuals to identify and follow a target direction or to receive guidance while passing stairs and turns. Most visually impaired individuals cannot find their path independently in an unfamiliar zone.
In general, there are many tools and ways to help blind people: 1) traditional ways, e.g. help from other people, the white cane, and guide dogs; 2) technology-based aids such as wearable devices and navigation robots.
Recently introduced technologies, such as the blind stick, have helped them move more freely; however, the implemented procedures are still insufficiently practical. The rapid pace of technical development has enabled many types of robots, such as those intended for commercial use and delivery. There are also several applications in hospitals and hotels, for example, a DoorDash robot employed to deliver food to customers via an application [2]. Another type of robot resembles humans, such as Sophia designed by the company Hanson Robotics1. These robots have been developed to learn and adapt to human behavior. Other existing robots exhibit various features: they can behave autonomously, find their way to specific locations in buildings, and so on.
This study aims to assist blind people in reaching their target (a classroom in schools/universities, a gate in airports) using a robot-based solution. Therefore, a solution that utilizes robotics to assist visually impaired individuals is developed. Robot-based solutions can be applied to assist visually impaired and blind people in walking and moving through various environments, such as schools, universities, hospitals, airports, and shopping malls. For example, when a visually impaired person enters an airport, a mobile robot can navigate them to the required gate according to the boarding ticket information. As another example, consider the situation in which a robot assists a student in navigating to a target classroom according to the timetable. The concept of robotic assistance should be grounded in an understanding of the needs of visually impaired individuals. In the present study, we focus on the task of navigating a visually impaired person to a predefined location within a particular indoor environment.
In the future, we plan to enhance the proposed SRAVIP system by adding more features such as fingerprint or face recognition, support for the Arabic language, and remote maintenance.
The remainder of this paper is organized as follows. In Section 2, the related literature is reviewed. The proposed system structure is outlined in Section 3. In Section 4, we describe the experimental setup and discuss the obtained results. The discussion is provided in Section 5, and the conclusion in Section 6.
II. LITERATURE REVIEW
Several systems and techniques have been proposed to
assist visually impaired persons by means of a wearable
device, a smart cane, or a robotic assistant.
1 https://www.hansonrobotics.com/sophia/
*Corresponding Author
A. Wearable Devices
Wearable devices have been incorporated into various modern techniques to assist visually impaired people. In [6], a smart assistant called Tyflos has been proposed to help with walking, navigating, and working in an indoor environment. Tyflos consists of seven main elements: a pair of glasses, vision cameras, a laser range scanner, a microphone, a portable computer, an ear speaker, and a communication system. Two optical cameras capture 3D images of the target environment. The system can operate in a continuous mode that records information about the environment in video format, converts it into verbal descriptions for each shot, and provides the user with the obtained description. Alternatively, acting as an assistant that describes the environment upon request, the system lets the user investigate the surroundings through verbal communication.
Lee and Medioni [3] have proposed a vest-type gadget with an embedded RGB-D camera for real-time navigation with obstacle avoidance (Fig. 1). Here, the RGB-D camera captures depth imagery, and traversability analysis is used to identify free and occupied spaces (obstacles). Directions, such as "go straight" and "stop", are conveyed to the user by means of four micro vibration motors on the vest.
Another system that employs a similar vest in a different way has been proposed in [4]; it is implemented as a stereo vision system for visually impaired users (Fig. 2). Using visual odometry together with feature-based metric-topological simultaneous localization and mapping (SLAM), the system generates a 3D map of the user's vicinity that can be used for obstacle detection and traversability map development.
Fig. 1. RGB-D Camera [3].
Fig. 2. Robot Vision [4].
In [16], the researchers have constructed a model aimed at controlling movements, perceiving objects, and maintaining a distance from obstacles in an indoor environment for visually impaired individuals. The framework comprises a module with a headset, a camera, a marker, an inertial measurement unit, and laser sensors mounted on the user's chest (Fig. 3).
Communication between the system and users is established through speech recognition and synthesis modules. The laser sensors register the data required to maintain a safe distance, while the camera captures scenes and sends them to the system for navigation or recognition.
Bai and Wang [5] have developed a low-cost wearable device that assists visually impaired persons in navigating and avoiding obstacles based on a dynamic subgoal strategy. The device is composed of an ultrasonic rangefinder, a depth camera, a fisheye camera, an earphone, a pair of see-through glasses, and an embedded CPU board. Visual SLAM is utilized to define a virtual trajectory based on the depth and RGB images, and the device finds a short path to a point of interest by applying the A* algorithm. However, a problem with this device is that ultrasonic sensor data may vary depending on the weather, which may lead to misdirection (Fig. 4).
Fig. 3. Photo of the Prototype [16].
Fig. 4. Navigation Device [5].
B. Smart Cane
Visually impaired individuals may encounter numerous dangers that threaten their safety, as many ordinary things in their surroundings can act as obstructions. The indoor framework presented in [7] therefore coordinates several components that capture and process data, which are then used to define the path and transmit route messages to the user while moving in an indoor zone. This framework contains key apparatuses including a white cane equipped with several lights, two infrared cameras, a portable computer with a software application that manages the whole framework, and a cell phone that conveys the route data to the user through voice messages (Fig. 5).
Navigating indoor environments is usually difficult for visually impaired persons; therefore, in [8], the authors have proposed a system that helps blind people identify the best path to a target destination. The system embeds RFID tags along paths, which are captured by an RFID reader through a cane-mounted antenna (Fig. 6).
In [9], a system called AKSHI has been proposed to assist visually impaired people in walking. The system has been implemented using a Raspberry Pi, a GSM module, an ultrasonic sensor, RFID tags, and a reader. The RFID reader is attached to the bottom of the cane; an ultrasonic sensor is mounted in the middle of the cane, and a box containing the Raspberry Pi 2, GPS, and GSM modules is mounted above the sensor. Moreover, a mobile application has been developed to track the user's location. However, the problem with this system is that the sensor cannot function accurately in dirty or muddy environments.
Muñoz et al. [10] have proposed a system using Tango devices on a mobile computing platform and have implemented a TSM-KF-based method for obstacle detection using an RGB-D camera. Automated map generation is achieved by an indoor map editor that parses a CAD model of an architectural floor map, and a smart cane serves as the human-machine interface and communication tool.
C. Robotics
The authors in [11] have proposed a suitcase-shaped navigation robot called Cabot to assist visually impaired persons in navigating to a target destination while avoiding obstacles. It comprises a stereo camera, a LIDAR sensor, a tactile handle, a portable computer, batteries, a motorized wheel, and a clutch. The tactile handle provides feedback and information about the required direction; however, the system does not consider dynamic elements. Many techniques and algorithms have been used to enable a robot to move and recognize a target environment [17-23]. A specific framework has been proposed in [12], in which RGB and depth images are employed for this purpose. The scheme represented in Fig. 7 conveys the general structure of this framework, which is divided into two sequential stages: the offline stage, in which the structure is established, and the online stage, in which the model is applied to identify a target item according to the predefined map.
Fig. 5. Components of the Navigation System [7].
Fig. 6. Navigation Device Attached to a User with a Headphone and a Cane with a Built-in RFID Antenna [8].
Fig. 7. Structure of the Proposed System [11].
Jalobeanu [13] has developed a framework based on a Kinect sensor to map indoor environments. This framework combines a grid map based on the truncated signed distance function with a particle filter used in real time. The framework exploits the noisy depth data and can increase the field of view by establishing virtual views from local maps. In [14], the researchers have combined LIDAR and sonar sensors; such multi-sensor combination techniques have been applied to identify objects while performing SLAM. When various sensors observe the same obstacle, they can confirm each other's information and improve each other's estimation precision. Kumar [15] has proposed a robotic navigation system with two controllers. The first computes the path from the current position to a target point without considering potential obstacles. The second is based on fuzzy logic and deals with obstacles by processing the input from a laser sensor and sending the output to the robot so that it avoids the identified obstacle.
III. PROPOSED SYSTEM
In this study, we developed a system called SRAVIP aimed at integrating visually impaired persons into society. To do this, we constructed a mobile robotic solution intended to assist such persons in walking and navigating in an indoor environment. SRAVIP aims to help blind people reach a target location (such as classrooms in schools/universities or gates in airports) using a robotic solution, as presented in Fig. 8. The proposed SRAVIP has the following structure:
1) Initialization system (Offline)
a) Map Construction: establishing a map of a target
environment;
b) User Registration: registering a user in the system.
2) Real-time operation system (Online)
a) Robot Navigation and Path Planning: navigating the
robot;
b) User Interface: communicating with a user (through
a speech-processing engine or an OCR system).
An important contribution of the proposed SRAVIP is that it is user-independent: it does not depend on a specific user, and one robot can serve an unlimited number of users. In this section, we provide an overview of the proposed system and discuss the interaction between its components, explaining each component in detail.
A. System Initialization
This phase is implemented to initialize the system. It
includes two steps: constructing the map of a target
environment and registering users in the system.
Map Construction
First, we execute system/robot initialization by determining where the mobile robot is located with respect to the surrounding environment (using simultaneous localization and mapping). To enable the robot to move from the current location to the destination according to the predefined map, we utilize an encoder, an IMU sensor, and a distance sensor to avoid obstacles. We employ a 2D occupancy grid map (OGM) to construct the map based on the distance information obtained using the sensors and the pose information of the robot itself (Fig. 9). Finally, the system saves the map.
Fig. 8. The Core Components of the Proposed System.
Fig. 9. System Initialization Diagram.
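To make the occupancy-grid idea concrete, the following minimal Python sketch shows how a single OGM cell could be updated from range readings using a log-odds inverse sensor model. The grid size and the two evidence weights are illustrative assumptions; in SRAVIP the actual map is produced by the SLAM package described in Section IV.

```python
# Illustrative sketch of a 2D occupancy grid map (OGM) update, assuming a
# simple log-odds inverse sensor model. Grid size and weights are made up.
import numpy as np

GRID = np.zeros((200, 200))   # log-odds values; 0 means unknown (p = 0.5)
L_OCC, L_FREE = 0.85, -0.4    # assumed evidence weights for hit / miss

def update_cell(ix, iy, hit):
    """Add evidence that cell (ix, iy) is occupied (hit) or free (miss)."""
    GRID[iy, ix] += L_OCC if hit else L_FREE

def probability(ix, iy):
    """Convert the cell's log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(GRID[iy, ix]))
```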
User Registration
In this step, the robot operator manually registers the information about a user in the database of the target organization (school, university, airport, etc.). Once registration is completed, the robot can identify a user who requires assistance in reaching a target destination by means of an ID scan or speech commands.
B. Real Time Operation
This phase is online and performs robot navigation, path planning, and user interaction (Fig. 10).
After a user interacts with the robot by telling it the ID number or presenting an ID card, the robot identifies the user and retrieves the corresponding information from the database (e.g., a student timetable or flight information). It then identifies the target location and determines the current one according to the predefined map of the building. The target and current locations and the map serve as the inputs to the path-planning algorithm. The robot computes a path that avoids obstacles and follows the shortest route, and then starts moving along it. Even if an unexpected obstacle is suddenly detected during the motion, the robot is able to avoid it and proceed to the target destination. During the movement, the current location of the robot is tracked with respect to its environment using simultaneous localization and mapping. Once the robot reaches the destination, it goes back to its initial location; otherwise, it continues executing the localization steps to proceed with the operation. When the robot reaches the initial location, it stops.
Fig. 10. Flowchart of the Real Time Operation Phase.
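The paper does not name its path-planning algorithm; as an illustration of how a shortest, obstacle-avoiding route can be found on an occupancy grid, here is a minimal A* search sketch in Python (A* is the planner used by the wearable system in [5] and a common choice for grid maps, so this is an assumption, not the authors' code).

```python
# Illustrative A* shortest-path search over a 2D occupancy grid.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) cells."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, None)]    # (f, g, node, parent)
    came_from, seen = {}, set()
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in seen:
            continue
        seen.add(node)
        came_from[node] = parent
        if node == goal:                              # rebuild the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc), goal), g + 1, (nr, nc), node))
    return None                                       # no path exists
```

For example, astar(grid, (0, 0), (5, 7)) returns the list of grid cells from start to goal, or None when the goal is unreachable.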
IV. SRAVIP IMPLEMENTATION
In this section, we describe the implementation of the proposed SRAVIP, which consists of two parts: the initialization of the system (offline) and the real-time operation (online). We realized SRAVIP on a Turtlebot3 and tested the proposed system within the College of Computer and Information Sciences, King Saud University, AlMuzahmiyah Campus.
Turtlebot3 has the following features: a Raspberry Pi 3 Model B/B+ and a 360° laser distance sensor (LDS-01). We modified it by mounting a camera, to facilitate recognizing a university card, and a microphone for speech recognition. We used the Robot Operating System (ROS), an open-source robotics software platform, as the robotic platform. In addition, we employed the Gazebo and RViz simulation tools.
A. Offline Implementation: System Initialization Phase
1) Map construction: We construct the map using the gmapping package, which implements a laser-based simultaneous localization and mapping (SLAM) technique to create a 2D occupancy grid map (Fig. 11).
Fig. 11. Building Map of a Corridor of the College.
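As a sketch of how the gmapping output can be captured from ROS, the following Python node subscribes to the standard /map topic and stores the resulting 2D occupancy grid. The node and output file names are hypothetical, since the paper does not publish its scripts; the topic and message type follow standard ROS conventions.

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the /map topic published by gmapping and
# save the 2D occupancy grid to disk for later use by the planner.
import rospy
import numpy as np
from nav_msgs.msg import OccupancyGrid

def map_callback(msg):
    # OccupancyGrid.data is a flat row-major list: -1 unknown, 0..100 occupied.
    grid = np.array(msg.data, dtype=np.int8).reshape(msg.info.height,
                                                     msg.info.width)
    np.save("college_corridor_map.npy", grid)   # hypothetical file name
    rospy.loginfo("Saved %dx%d map, resolution %.3f m/cell",
                  msg.info.width, msg.info.height, msg.info.resolution)

if __name__ == "__main__":
    rospy.init_node("map_saver_sketch")         # hypothetical node name
    rospy.Subscriber("/map", OccupancyGrid, map_callback)
    rospy.spin()
```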
2) User registration: In the user registration step, we employ MySQL as the relational database management system to register users in the system. The entity relationship (ER) diagram is illustrated in Fig. 12.
Fig. 12. ER Diagram of Blind Registration.
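A minimal sketch of the registration step against MySQL is given below. The table name, column names, credentials, and sample record are assumptions that loosely mirror the ER diagram in Fig. 12, not the authors' actual schema.

```python
# Sketch of the user-registration step, assuming a hypothetical MySQL schema.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="sravip", password="secret",  # hypothetical credentials
    database="sravip_db")                                 # hypothetical database
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS users (
                 student_id VARCHAR(12) PRIMARY KEY,
                 name       VARCHAR(64),
                 timetable  VARCHAR(255))""")
# Insert a sample (entirely hypothetical) user record.
cur.execute("INSERT INTO users (student_id, name, timetable) VALUES (%s, %s, %s)",
            ("000000000", "Participant A", "Sun 10:00 Class 98"))
conn.commit()
conn.close()
```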
B. Real Time Operation (Online)
In this part, we implement the interaction between a user and the robot, either by scanning a university card through the OCR system or by using the speech recognition system to identify the university ID number. The real-time operation modules include two main steps: user interaction and robot navigation/path planning.
6. (IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 12, No. 7, 2021
350 | P a g e
www.ijacsa.thesai.org
1) User interface implementation: In this section, we explain how a user interacts with the robot through the developed user interface (UI), implemented in Python. In the UI (Fig. 13), a user has two options: to present a user ID card, from which the ID number is extracted by the optical character recognition (OCR) system and looked up in the database, or to say the ID number so that the robot detects it through the speech recognition module. If the user is correctly registered in the system, the ID number appears on the screen.
We implemented the UI using the tkinter package, the standard Python interface to the Tk graphical user interface toolkit.
Fig. 13. Robot GUI.
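The following tkinter sketch reproduces the two-option interaction described above (scan a card or speak the ID). The callback bodies are placeholders for the OCR and speech modules, whose code the paper does not provide.

```python
# Minimal tkinter sketch of the two-option robot UI; scan_card and
# listen_for_id are hypothetical stand-ins for the OCR and speech modules.
import tkinter as tk

def scan_card():
    result_var.set("Scanning ID card...")    # would invoke the OCR module here

def listen_for_id():
    result_var.set("Listening for ID...")    # would invoke speech recognition here

root = tk.Tk()
root.title("SRAVIP")
result_var = tk.StringVar(value="Present your ID card or say your ID number")
tk.Label(root, textvariable=result_var).pack(padx=10, pady=10)
tk.Button(root, text="Scan ID card", command=scan_card).pack(fill="x")
tk.Button(root, text="Say ID number", command=listen_for_id).pack(fill="x")
root.mainloop()
```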
2) OCR implementation: Optical character recognition (OCR) is the technology used to extract the ID number from a student card. To do this, we implemented several stages, as illustrated in Fig. 14, using various libraries. The first library is NumPy, which is used to generate arrays and perform the subsequent data processing. We then employed the TensorFlow library with a neural network trained on 500 images. Lastly, we utilized the OpenCV library to execute the clustering operations.
Fig. 14. Flowchart of the OCR System.
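A hedged sketch of such an OCR pipeline is shown below: OpenCV binarizes the card image and segments digit candidates, and a trained TensorFlow classifier labels each digit. The model file name, the 28x28 input size, and the noise threshold are illustrative assumptions, not the paper's trained network.

```python
# Sketch of an ID-number OCR pipeline (NumPy + TensorFlow + OpenCV),
# under assumed model and preprocessing parameters.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("id_digit_classifier.h5")  # hypothetical model

def read_id_number(card_image_path):
    img = cv2.imread(card_image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Find digit candidates and sort them left to right.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])
    digits = []
    for x, y, w, h in boxes:
        if h < 10:                               # assumed noise threshold
            continue
        roi = cv2.resize(binary[y:y + h, x:x + w], (28, 28)) / 255.0
        pred = model.predict(roi.reshape(1, 28, 28, 1), verbose=0)
        digits.append(str(np.argmax(pred)))
    return "".join(digits)
```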
3) Speech recognition implementation: Speech recognition includes several stages, described in Fig. 15. The most important step is representing the relationship between the language units of speech and the voice signals. Another module matches the linguistic model's sounds to number sequences, allowing the system to distinguish between numbers that sound similar. Using the speech recognition module, the system recognizes the ID number pronounced by the user. If the user is registered in the system, the ID number is looked up in the database and shown on the system screen. According to the ID number, the user data and schedule are extracted from the database, and the robot navigates the user to the target location.
Fig. 15. Flowchart of the Speech Recognition System.
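For illustration, a minimal sketch using the Python SpeechRecognition package is given below; the paper does not name its speech stack, so this library choice is an assumption. The digit post-processing mirrors the matching of linguistic units to number sequences described above.

```python
# Sketch: capture a spoken ID number and reduce the transcript to digits.
import re
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    audio = recognizer.listen(source)

# Transcribe, then keep only the digits so transcriptions such as
# "four three seven ..." reduce to a clean ID string.
text = recognizer.recognize_google(audio)
student_id = "".join(re.findall(r"\d", text))
print("Recognized ID:", student_id)
```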
4) Robot navigation and path planning: Using the
constructed map, and after the student's data are loaded from
the database, SRAVIP defines the class coordinates and
automatically saves them in a text file. Thereafter, the system
processes them to set a coordinate target for navigation,
and the robot automatically executes path planning to move
from its initial location to the requested one. After the robot
reaches the target location, it returns to the initial point.
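A hedged sketch of this step is shown below: the class coordinates (read from the text file mentioned above; the filename is hypothetical) are sent as a goal to the standard ROS move_base action server, which performs the path planning on the constructed map. The paper does not state which ROS navigation stack was used, so move_base is an assumption.

```python
# Sketch: send the saved class coordinates as a navigation goal to move_base.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("sravip_navigator")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

# Target coordinates saved earlier by the system, e.g. "12.4 -3.1".
with open("class_coordinates.txt") as f:
    x, y = map(float, f.read().split())

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = x
goal.target_pose.pose.position.y = y
goal.target_pose.pose.orientation.w = 1.0  # arbitrary fixed heading

client.send_goal(goal)
client.wait_for_result()  # robot guides the user, then can return home
```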
V. SRAVIP EXPERIMENT AND TESTING
We tested the SRAVIP system within the simulation
environment established at the College of Computer and
Information Sciences, King Saud University, AlMuzahmiyah
Campus. In the experiment, we considered the two possible
scenarios of interaction between the proposed system and
users: in the first scenario, experiment participants utilized the
OCR system, and in the second scenario, the speech
recognition system was employed. The experiment procedure
was specified as follows. When a participant entered the
college building through the main gate, the robot could be found
on the right side of the door; this fixed location was designated
the robot's "Home Address." The first scenario involved the
OCR system (Fig. 16). At the beginning, participant A presented
the university card (Fig. 17) to the robot so that the
student ID number could be extracted.
Fig. 16. Flowchart of the Experimental Scenarios using OCR and Speech
Recognition.
Fig. 17. Extraction of a University ID Number using the OCR System.
Thereafter, the system retrieved the information
about the student's schedule and defined the coordinates of the
target class for this user. Then, the coordinates were processed
by the robot navigation system (Fig. 18).
Fig. 18(a) shows the robot at the "Home Address"
location, waiting for the path to the target class (Fig. 18(b))
to be generated by the robot navigation system. The
student then walks to the class with the assistance of the robot
(Fig. 18(c)); finally, the robot automatically returns to the home
address so that it can serve other students.
a) Robot at the "Home Address" Location.
b) Generated Path to Class 98.
c) Navigation and Path Planning to Class 98.
Fig. 18. Robot Navigation System used in the First Scenario.
The second scenario employed the speech
recognition system (Fig. 16). In this case, participant B spoke the
university ID number to the robot, so that the user information
could be extracted and identified from the database using speech
recognition technology. After that, the system loaded
the information about the student's schedule and the coordinates
of the target class for this user. Then, the coordinates were used
by the robot navigation system to construct a trajectory
(Fig. 19).
Fig. 19(a) shows the robot at the "Home Address"
location, waiting for the path to the target class (Fig. 19(b))
to be generated by the robot navigation system. The
student then walks to the class with the assistance of the robot
(Fig. 19(c)); finally, the robot automatically returns to the home
address so that it can serve other students.
a) Robot at the "Home Address" Location.
b) Generated Path to Class 98.
c) Navigation and Path Planning to Class 97.
Fig. 19. Robot Navigation System used in the Second Scenario.
VI. DISCUSSION
As shown in the literature review, several systems
and techniques have been proposed to assist visually impaired
persons by means of a wearable device, a smart cane, or a
robotic assistant. All of those solutions are user-dependent
and work as one-to-one solutions: one device can be used by
only one blind person. An important contribution of the
proposed SRAVIP is its user-independent property, i.e., it does
not depend on a particular user. It is a one-to-many
solution: one robot can serve an unlimited number of users.
VII. CONCLUSION
In the present paper, we proposed a robotic solution to
assist visually impaired people in walking and navigating within
various indoor environments, such as schools, universities,
hospitals, and airports, according to a prescheduled task. An
important contribution of the proposed system is its user-
independent property, i.e., it does not depend on a particular
user. It is a one-to-many solution: one robot can serve an
unlimited number of users, whereas the previous solutions
work as one-to-one solutions, in which one device is used by
only one blind person.
The proposed system, called the smart robot assistant for
visually impaired persons (SRAVIP), is composed of two
subsystems: the initialization module and the real-time
operation module. We employed a Turtlebot3 to implement the
proposed system and then tested it in several scenarios at
the College of Computer and Information Sciences, King Saud
University, AlMuzahmiyah Campus. The experimental results
indicated that the proposed system can be utilized
successfully. In the future, we plan to enhance the
proposed SRAVIP system by enabling remote maintenance,
adding more features such as fingerprint or face recognition,
and supporting speech recognition in different languages such as
Arabic.
ACKNOWLEDGMENT
The authors extend their appreciation to the Deanship of
Scientific Research at King Saud University for funding this
work through research group no. RG-1441-503.