Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos.
A small helping hand from me to my engineering colleagues and other friends in need of object detection.
Efficient and accurate object detection has been an important topic in the advancement of computer vision systems.
Our project aims to detect objects with the goal of achieving high accuracy and real-time performance.
In this project, we use a completely deep-learning-based approach to solve the problem of object detection.
The input to the system is a real-time image, and the output is a bounding box for each object in the image, along with the class of the object in each box.
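The detection output described here (one box plus one class label per object) can be made concrete with a small sketch. The `Detection` structure and the intersection-over-union (IoU) helper below are illustrative names for this write-up, not part of the project's MATLAB code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # object class, e.g. "car"
    x1: float    # top-left corner
    y1: float
    x2: float    # bottom-right corner
    y2: float

def iou(a: Detection, b: Detection) -> float:
    """Intersection-over-Union of two axis-aligned boxes (0.0 to 1.0)."""
    ix = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))
    iy = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))
    inter = ix * iy
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Identical boxes overlap perfectly:
print(iou(Detection("car", 0, 0, 10, 10), Detection("car", 0, 0, 10, 10)))  # 1.0
```

IoU is the standard measure of how well a predicted box matches a ground-truth box; detectors typically count a prediction as correct when IoU exceeds a threshold such as 0.5.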
Objective
Develop an application that detects objects and can be used for vehicle counting: when the detected object is a vehicle such as a bicycle or car, it counts how many vehicles have passed through a particular area or road, and it can also recognize human activity.
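As a rough illustration of the vehicle-counting use case, the sketch below counts tracked objects whose centre crosses a virtual counting line. The function and its inputs are assumptions made for illustration, not the project's MATLAB implementation:

```python
def count_line_crossings(tracks, line_y):
    """Count tracked objects whose centre crosses a horizontal line.

    tracks: dict mapping a track id to a list of (x, y) centre positions
    over successive frames. An object is counted once, the first time its
    y-coordinate passes from one side of line_y to the other.
    """
    count = 0
    for positions in tracks.values():
        for (_, y0), (_, y1) in zip(positions, positions[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # sign change => crossing
                count += 1
                break  # count each vehicle at most once
    return count

# Two cars driving downward past y=100, one car staying above it:
tracks = {
    1: [(50, 80), (52, 95), (53, 110)],     # crosses the line
    2: [(200, 60), (201, 90), (202, 130)],  # crosses the line
    3: [(120, 20), (121, 40), (122, 60)],   # never crosses
}
print(count_line_crossings(tracks, line_y=100))  # 2
```

A real system would obtain the per-frame positions from the detector's bounding-box centres plus a simple tracker that associates boxes across frames.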
1. Department of Electronics Engineering
Department of Electronics Engineering, VESIT 2022-23
THIRD EYE
Name of Guide: Mr. Abhijeet Shete
D11A Group No: G11-7
Sr No  Group Member     Roll No
1.     Sejal Bishoyi    11
2.     Jaykumar Kabra   26
3.     Swaraj Patil     44
4.     Om Suwarnakar    62
2. Why This Project?
1. Today, machine learning is used in many types of industries, from medical image processing to autonomous cars. Detecting objects in images has become an important research area, and computers can now not only detect objects but also draw bounding boxes around them. This is also known as computer vision.
2. Therefore, we proposed the implementation of computer vision machine learning algorithms to detect objects.
3. Why This Project?
The World Health Organization (WHO) reported that at least 2.2 billion people worldwide have a visual impairment or blindness.
1. 1.09 billion people over the age of 35 suffer from visual impairment.
2. Assistive devices have been used by blind and visually impaired people to overcome various physical, social, infrastructural, and accessibility barriers to independence and to live active, productive, and independent lives as equal members of society.
4. Introduction
1. Vision is one of the most essential human senses, and it plays the most important role in human perception of the surrounding environment.
2. Detecting objects in images has become an important research area, and computers can now not only detect objects but also draw bounding boxes around them. This is also known as computer vision.
3. We proposed the implementation of computer vision machine learning algorithms for object detection.
5. Literature Review
1. 2019 6th IEEE International Conference on Engineering Technologies and Applied Sciences (ICETAS): Object Detection and Narrator for Visually Impaired People.
This paper explains how convolutional neural networks trained on the ImageNet dataset can detect objects and narrate information about them to the visually impaired person. The implementation can be used with any camera-equipped device, including computers, tablets, and mobile phones.
6. Literature Review
2. International Journal of Engineering Research & Technology (IJERT), Vol. 9, Issue 09, September 2020: Assistive Object Recognition System for Visually Impaired.
This paper proposes to aid the visually impaired with a system that is feasible, compact, and cost-effective: a Raspberry Pi running the You Only Look Once (YOLOv3) machine learning algorithm trained on the COCO dataset.
7. Literature Review
3. 2020 IEEE Region 10 Symposium (TENSYMP), 5–7 June 2020, Dhaka, Bangladesh: Assistive Technology for Visually Impaired using Tensorflow Object Detection in Raspberry Pi and Coral USB Accelerator.
This paper aims to develop an assistive technology based on computer vision, machine learning, and TensorFlow to support visually impaired people. The proposed system allows users to navigate independently using real-time object detection and identification.
8. Literature Review
4. European Journal of Molecular & Clinical Medicine, ISSN 2515-8260, Volume 7, Issue 4, 2020: Real Time Object Detector for Visually Impaired using OPEN.
The goal of this project is to model an object detector that recognizes objects at a particular distance for visually impaired people and other commercial purposes. The paper proposes a computer vision approach that converts detected objects to text by importing a pre-trained dataset model from the Caffe framework; the text is then converted into speech.
9. Hardware/Software Requirements
Hardware:
1. Integrated camera
2. Integrated speaker
Software:
MATLAB & Simulink
Libraries:
1. Computer Vision Toolbox
2. Computer Vision Toolbox Model for Mask R-CNN Instance Segmentation
3. Deep Learning Toolbox
4. Deep Learning Toolbox Model for ResNet-50 Network
5. Image Processing Toolbox
6. MATLAB Support Package for USB Webcams
10. Flowchart
11. Block Diagram
12. Working
1. Initially, the images are captured through the camera and sent to the pretrained model.
2. A machine learning model (resnet50-coco) detects the objects in the image.
3. At the backend, image processing takes place, with the necessary steps such as feature detection and feature extraction.
4. Once the objects are detected, the output is sent to the user in the form of audio.
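The four steps above can be sketched end to end. This is a minimal illustration, not the project's MATLAB code: `detect_objects` is a stand-in stub for the pretrained resnet50-coco model, and the narration string format is an assumption:

```python
def detect_objects(image):
    """Stand-in for the pretrained model (the project uses a resnet50-coco
    model in MATLAB); a real system would return labels from the detector."""
    return ["person", "chair"]  # placeholder result for illustration

def narrate(labels):
    """Turn detected labels into the sentence a voice library would speak."""
    if not labels:
        return "No objects detected."
    return "Detected: " + ", ".join(labels) + "."

def pipeline(capture_frame, speak):
    frame = capture_frame()          # step 1: image from the camera
    labels = detect_objects(frame)   # steps 2-3: model finds the objects
    speak(narrate(labels))           # step 4: audio output to the user

# Exercise the pipeline with a dummy camera and a list in place of a speaker:
spoken = []
pipeline(capture_frame=lambda: "frame-bytes", speak=spoken.append)
print(spoken[0])  # Detected: person, chair.
```

In the real system the `speak` step would be a text-to-speech call and `capture_frame` would read from the USB webcam.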
13. Courses
MATLAB Onramp: 14 modules (completed)
MATLAB Machine Learning: 6 modules (completed)
MATLAB Deep Learning: 13 modules (completed)
MATLAB Image Processing: 11 modules (completed)
14. Plan of Implementation
August 2022: Deciding the topic and researching
September 2022: Completing the courses required for the project
October 2022: Adding the image data and image processing
November 2022: Completion of the project (software)
15. Applications
1. Tracking objects
2. People counting
3. Automated CCTV
16. Applications
4. Person detection
5. Vehicle detection
18. Result
Figures: accuracy graph and confusion matrix.
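For reference, the overall accuracy reported alongside a confusion matrix is the sum of the diagonal (correct classifications) divided by the total number of samples. The matrix values below are hypothetical, not the project's measured results:

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = correctly classified (diagonal) / all samples."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical 3-class confusion matrix (rows = true class, cols = predicted):
cm = [
    [50, 2, 3],
    [4, 45, 1],
    [2, 3, 40],
]
print(round(accuracy_from_confusion(cm), 3))  # 0.9
```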
19. Conclusion
In this paper we proposed a system which can be used to detect objects in various fields, such as medical science and the automobile industry, and which can even assist visually impaired people in understanding the environment by narrating the objects in their surroundings. The developed system is based on MATLAB: on loading, it takes the image from the camera and passes it to the server. On the server side, a trained machine learning model is deployed to detect the objects in the image. The detection result is passed back to the client, where a voice library narrates it to the visually impaired person.
20. References
1. World Health Organization. Visual Impairment and Blindness. (accessed on 24 January 2016). Available online: http://www.who.int/mediacentre/factsheets/fs282/en/
2. American Foundation for the Blind. (accessed on 24 January 2016). Available online: http://www.afb.org/
3. National Federation of the Blind. (accessed on 24 January 2016). Available online: http://www.nfb.org/
4. Velázquez R. Wearable assistive devices for the blind. In: Wearable and Autonomous Biomedical Devices and Systems for Smart Environment. Springer; Berlin/Heidelberg, Germany: 2010. pp. 331–349.
5. Baldwin D. Wayfinding technology: A road map to the future. J. Vis. Impair. Blind. 2003;97:612–620.
6. Blasch B.B., Wiener W.R., Welsh R.L. Foundations of Orientation and Mobility. 2nd ed. AFB Press; New York, NY, USA: 1997.
7. Shah C., Bouzit M., Youssef M., Vasquez L. Evaluation of RUNetra tactile feedback navigation system for the visually-impaired. Proceedings of the International Workshop on Virtual Rehabilitation; New York, NY, USA. 29–30 August 2006; pp. 72–77.
8. Hersh M.A. The Design and Evaluation of Assistive Technology Products and Devices Part 1: Design. In: International Encyclopedia of Rehabilitation. CIRRIE; Buffalo, NY, USA: 2010.
9. who.int
…
21. Thank you