This document provides a summary of approaches to obstacle detection in images. It discusses 6 different techniques: 1) Using 3D grids and 2D grids with stereo vision, 2) A sequence of filters including ground/non-ground segmentation, background subtraction, size and appearance/shape filters, 3) An algorithm using disparity images, 4) Haar-like features with a cascade classifier, 5) Fast human detection using depth images, and 6) Using "equivalent points". Each technique is analyzed in 1-2 paragraphs with figures to illustrate key aspects.
Abstract
In this survey, we examine techniques for obstacle detection in detail. The study goes through the importance of obstacle detection, the applications that include obstacle detection kits, and the various techniques used to achieve obstacle detection. The survey mainly discusses three areas of obstacle detection: applications invented to assist visually impaired people, path planning with obstacle avoidance for mobile robots, and the various kinds of autonomous vehicles that have built-in obstacle detection systems. The survey also reviews the main obstacle detection techniques, such as monocular techniques, stereo techniques, and depth-map generation techniques. In the discussion, all the techniques are compared and their pros and cons are considered briefly. The later part of the survey discusses future enhancements and challenges in obstacle detection.
Acknowledgements
I would like to thank my advisor, Prof. N. D. Kodikara, for his experienced guidance and constant support throughout this period. I am also obliged to thank my family members and friends, whose support, well wishes, and encouragement brought me here.
1 Introduction
This survey reviews how obstacle detection is done in various application areas and discusses the techniques used for obstacle detection in images. Obstacle detection has recently become quite an interesting topic among researchers, and numerous applications that include obstacle detection kits have been invented in different areas. The survey therefore focuses mainly on three areas. Most vision systems of mobile robots contain an obstacle detection system; robots are increasingly involved in industrial production and military activities, and navigating through obstacle-filled environments is a big challenge for mobile robots, so many techniques have been developed to make the navigation task easier. There are also many applications that use obstacle detection systems to assist visually impaired people, who suffer from a lack of information about their surroundings; such assistive applications detect various obstacles, analyze them, and present the details to the user. Nowadays, many unmanned vehicles are produced for different purposes: the military, farming, and weather forecasting fields use aircraft and vehicles that come equipped with obstacle detection systems.
1.1 Motivation
Vision-based obstacle detection is truly motivated by nature. Most animals have gained different kinds of vision systems. As humans, our eyes provide amazing images and remarkable 3D information for obstacle detection. Flying insects use optical flow, which gives them extraordinary navigation and obstacle detection capabilities.

An obstacle detection system is actually an important sensor for robots and autonomous vehicles, because it is a crucial capability to have when operating in an unknown world: it delivers vital information about the surroundings needed for successful navigation. Vision-based obstacle detection therefore contributes directly to the safe operation of mobile robots and autonomous vehicles.
1.2 Objectives
The main aspiration of this survey is to discuss the pros and cons of existing algorithms and analyze their efficiency. The research question of this literature survey is which real-time technique is most suitable for navigation and path-planning purposes in mobile robots.
2 Background
2.1 Applications
Vision-based obstacle detection has become a very interesting topic in the recent past, because many applications now have a vision system. Mainly, mobile robots have obstacle detection systems [1] [2] [7]. When a mobile robot enters an unknown environment, its vision system is very helpful for performing its operations and surviving in that environment, and obstacle avoidance helps it plan its path and reach its destination [1]. Military services especially use these kinds of robots for dangerous operations [2]. Vision systems are also used to create applications that help visually impaired people [3] [4] [7], who suffer from a lack of information about their surroundings. Researchers have invented applications that can detect obstacles and give the user a message about where each obstacle is. Staircases are really challenging for a visually impaired person to go through, so there are applications for detecting staircases and describing them in detail [4]. Many unmanned autonomous vehicles also use obstacle detection systems: when there is no one to control it, the vehicle has to have good vision to navigate through the obstacles. Military services use unmanned aircraft to spy on enemy activities; those helicopters may need to land immediately in different emergency situations, and then they have to find an area with minimal obstacles for landing [10]. There are also applications created for farming purposes, such as an autonomous vehicle with a vision system for obstacle avoidance that can safely navigate through fruit trees [11].
2.2 Errors
When we work with training data, some errors are inevitable. Vision-based obstacle detection has two main kinds of errors.
2.2.1 False Positive
A false positive occurs when the obstacle detection system reports an obstacle but there is nothing in the real environment. For example, if there is a piece of paper on the floor, the system may detect it as an obstacle even though the robot can easily drive over that paper. This is false positive behavior.
2.2.2 False Negative
A false negative occurs when the environment truly contains an obstacle but the system does not detect it. In those situations, devices can be harmed by the undetected obstacle.
2.2.3 Noise and Non-Linearities
Noise is defined as random differences in the brightness information of an image. Such noise is generated by film grain or by electronic noise in the input stereo camera. Researchers use different techniques to eliminate noise and non-linearities; modelling the noise as random Gaussian noise and applying a Kalman filter are some of them.
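As an illustration of the filtering idea, a minimal one-dimensional Kalman filter can smooth a noisy scalar reading such as a range measurement. The variances and measurement values below are invented for the sketch, not taken from the surveyed papers.

```python
# Minimal 1D Kalman filter for smoothing a noisy range reading.
# The process/measurement variances are illustrative values only.

def kalman_smooth(measurements, process_var=1e-4, meas_var=0.04):
    """Return filtered estimates for a sequence of scalar readings."""
    estimate, error = measurements[0], 1.0   # initial state and uncertainty
    out = []
    for z in measurements:
        error += process_var                 # predict: uncertainty grows
        gain = error / (error + meas_var)    # how much to trust the reading
        estimate += gain * (z - estimate)    # update toward the measurement
        error *= (1.0 - gain)
        out.append(estimate)
    return out

readings = [2.00, 2.10, 1.95, 2.05, 3.50, 2.02]  # 3.50 is a noise spike
smoothed = kalman_smooth(readings)
```

With these settings, the spike at 3.50 is damped rather than passed through, which is the behavior one wants before feeding range data to an obstacle detector.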
2.3 3D versus 2D
Many robots have a 2D obstacle detection system. Solving the 2D obstacle detection problem is much easier than solving the 3D one. A 2D detection system always deals with the intensity value at each pixel of a particular image and checks whether obstacles are identified or not; the first image indicates the changes in the scene. A 3D obstacle detection system, in contrast, has to model the whole scene.
3 Approaches to obstacle detection
3.1 Obstacle detection using 3D and 2D grids
It is very important for a robot to know about its environment while it is seeking a path to a destination [1]. This method shows how the robot identifies obstacles and how it plans its path through the regions mentioned below (Figure 3.1).
Figure 3.1: Gate, obstacle, floor and bump regions that robots can move through.
The specialty of this method is that the robot can move through the obstacle area above and can measure the obstacles. First the robot captures images of its surroundings and then generates a 3D grid from those images. Afterwards the 3D grid is converted into a 2D grid.
3.1.1 Method introduction
The proposed method makes some assumptions:
1. All objects in the environment are divided into four classes: floor, obstacle, bump, and gate areas, so the robot can recognize objects by their shape and distance information.
2. The sizes and locations of the landmarks are known in advance, so the robot knows its correct coordinates.
3. The robot knows its destination in advance, so it can plan the path to the destination.
A flowchart of the motion planning method is shown below (Figure 3.2).
Figure 3.2: Flowchart of the motion planning method
3.1.2 Stereo Measurement
The robot obtains information about the environment using a stereo camera. Figure 3.3 shows images of the environment, which are then converted into the 3D model of the environment as shown in Figure 3.4. From that, the robot can take all the measurements needed to build the 3D grid (Figure 3.5).
Figure 3.3: Robot can see obstacles
Figure 3.4: Takes all measurements
Figure 3.5: 3D grid
A method called configuration space (C-space) is used to represent the robot: it considers the robot as a point without any size. All the obstacles in the environment are called configuration obstacles (C-obstacles), and they are enlarged based on the robot's size. Figure 3.6 shows an example of a C-space.
Figure 3.6: C-Space
After taking all the 3D measurements, the method creates a 3D grid map as shown in Figure 3.5, and then converts it into a 2D map. The 2D map can be classified into several regions; Figure 3.6 shows all the classified areas of the 2D map. Unmeasured areas are reduced by a continuous learning process. From this point on, the 2D grid map is treated as a graph consisting of nodes and edges, and the robot can plan its path using graph traversal techniques.
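The graph-traversal step can be sketched as a breadth-first search over a small occupancy grid, where 0 marks a free cell and 1 an obstacle cell. The grid below is a made-up example, not data from the paper.

```python
from collections import deque

# Breadth-first path planning on a 2D occupancy grid (0 = free, 1 = obstacle).

def plan_path(grid, start, goal):
    """Return the shortest 4-connected cell path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall of obstacle cells
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

A real planner would run the same traversal over the classified 2D grid, treating gate and floor cells as free and obstacle cells as blocked.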
3.2 Obstacle detection using a sequence of filters
This approach is a combination of stereo and monocular image-based vision algorithms [2]. It mainly applies a sequence of filters to the color camera images; the implemented series of filters is shown in Figure 3.7.
Figure 3.7: Implemented series of filters
The foreground information is then extracted and grouped into blobs, which are filtered based on a size model, an appearance model, and a shape model. Many non-target blobs are eliminated by this process, so the system can focus on possible targets. Most importantly, the method uses a series of filters, and each model is learned on the fly from the previous one.
3.2.1 Ground/Non-ground Segmentation.
Since this is used for a mine-discovering robot, the low-lying obstacles on the surface must be clearly identified. For this purpose a correlation-based pyramidal stereo algorithm is used to obtain a threshold, and the RANSAC algorithm is used to estimate the ground surface: a series of 3D stereo points is given to the algorithm, which returns an estimate, and according to that value each point is decided to be a potential target or not.
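A minimal sketch of the RANSAC ground-surface estimate, assuming the points arrive as metric 3D coordinates. The threshold, iteration count, and synthetic points below are illustrative choices, not parameters from the surveyed paper.

```python
import numpy as np

# RANSAC plane fit: repeatedly fit a plane to 3 random points and keep
# the plane that agrees with the most points (the inliers).

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Return (normal, offset) of the plane n.x = offset with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate sample (collinear points)
        n = n / norm
        offset = n @ p1
        inliers = np.sum(np.abs(points @ n - offset) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (n, offset), inliers
    return best

# Mostly flat ground (z near 0) plus two raised "obstacle" points.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 5, 100),
                          rng.uniform(0, 5, 100),
                          rng.normal(0, 0.01, 100)])
points = np.vstack([ground, [[1.0, 1.0, 0.5], [2.0, 3.0, 0.8]]])
normal, offset = ransac_plane(points)
```

Points far from the fitted plane (here the two raised points) are exactly the candidates the paper treats as potential targets.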
3.2.2 Background Subtraction.
The main distraction on the ground is moving objects. To overcome this problem, the algorithm creates a composite image over the time the object is moving, along with images from before and after the object is thrown. All the images are analyzed, and all the detected changes are grouped into blobs.
3.2.3 Size filter
In this phase, blobs are filtered according to the known size of the target object. The distance to each blob is estimated using stereo.
3.2.4 Appearance-based Recognition Filter.
Only color is considered, because the target objects are textureless. The robot does not know in advance what the target obstacle looks like, so all sides of the object have to be presented to the robot, which is trained to recognize them.
3.2.5 Shape-based Recognition Filter.
The most effective way to identify an object is by its shape. A shape-based recognition algorithm is used to identify the object among the object classes.
3.3 Object collision detection algorithm based on disparity images
This technique is mainly used for detecting obstacles together with their depths [3]. The approach was used in an application that helps visually impaired people [1], who really want detailed information about their surroundings. A disparity image, or depth map, is used to store the depth of each pixel in the image; each pixel of the depth map corresponds to a pixel in the image. Near objects appear in lighter colors and far objects in darker ones; Figure 3.8 shows an example.
Figure 3.8: Depth image
Depth map information is stored using 64 gray levels, but more levels have to be added in different circumstances, such as outdoor scenes or high-quality pictures. The depth of a pixel is calculated by the following formula:

z = (f × b) / d

where z is the depth, f is the focal length in pixels, b is the baseline in meters, and d is the disparity. The proposed algorithm detects objects at two levels: a one-meter depth threshold catches objects in near closeness, and a two-meter threshold is used for first detection.
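The formula translates directly into code. The focal length and baseline below are example values, not the parameters of any system described in the survey.

```python
# Depth from disparity: z = f * b / d, with f in pixels, b in meters,
# and d (the disparity) in pixels.

def depth_from_disparity(d, f=700.0, b=0.12):
    """Return the metric depth z for a disparity of d pixels."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * b / d

z = depth_from_disparity(42.0)   # 700 * 0.12 / 42 = 2.0 meters
```

Note the inverse relationship: doubling the disparity halves the depth, which is why near objects appear lighter (larger disparity) in the depth map.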
The proposed algorithm, briefly:
1. Break down the disparity image with 2D Ensemble Empirical Mode Decomposition.
2. Filter the output disparity images to dispose of higher frequencies and noise.
3. Define two regions of interest (ROIs) to detect nearby objects at the two levels; in this case, one and two meters.
Finally, all the depth-map information is combined and applied to the real image to extract the real object information.
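Step 3 can be sketched as two boolean masks over a metric depth map, one per detection level; the depth values below are invented for the sketch.

```python
import numpy as np

# Two-level nearby-object detection on a metric depth map:
# a 1 m "near" level and a 2 m "first detection" level.

def detection_levels(depth_m, near=1.0, first=2.0):
    """Return (near_mask, first_mask) boolean images from a depth map in meters."""
    near_mask = depth_m < near
    first_mask = depth_m < first
    return near_mask, first_mask

depth = np.array([[0.8, 1.5, 3.0],
                  [0.9, 2.5, 1.2]])
near_mask, first_mask = detection_levels(depth)
```

Pixels in `near_mask` would trigger the near-closeness warning, while the wider `first_mask` gives the earlier, two-meter detection.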
3.4 Haar-like features with cascade classifier
When a visually disabled person is walking through an area, a staircase is a major obstacle to get through [4], so there are applications that detect various staircases to assist them [1].

The main objective of this technique is to improve detection accuracy. To achieve this, a set of positive images is collected, each containing 4 to 5 steps and taken under various conditions, along with a set of negative samples that do not contain any staircase. Training is then performed until the cascade is built up to 18 layers. Note that training is performed in fewer steps than in a normal application, because a higher detection rate is always preferable in staircase detection applications.

During detection, regions that merely look similar to a staircase are often detected as well; these are called candidate detections. To overcome such false detections, the cascade classifiers created in the training session are used with 40 × 40 sub-windows of different sizes for scanning. The Haar detector produces multiple hits in a true staircase region, whereas a false detection appears as a single isolated detection.

Note that a stereo camera system produces very poor results for staircase detection, even though it gives very informative results with a low false detection rate; that is why such an algorithm had to be developed. All the stages of the algorithm are shown in Figure 3.9.
Figure 3.9: Algorithm owchart
3.5 Fast Human Detection for Indoor Mobile Robots Using Depth Images
One of the biggest challenges for mobile robots is detecting moving humans [5], because humans can appear in different postures, such as walking or sitting, and some are only partially visible. Detecting humans under all these variations is a very challenging task.
This approach uses a Microsoft depth camera to detect humans; the depth camera provides depth images. The algorithm scans the depth images and identifies the humans in them. The algorithm has three main phases.
3.5.1 Depth image segmentation.
Figure 3.10: Identifying the corresponding regions of distinct object in depth images
The purpose of this step is to identify the regions corresponding to distinct objects in the depth images (Figure 10). However, the resolution of depth images degrades with distance, so it is not necessary to segment the images at full resolution for larger distances. A more practical way is to subsample the depth image and then do the segmentation.
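The subsampling step can be sketched in a couple of lines; a list-of-lists depth map stands in for a real depth image here:

```python
# Keep every `step`-th row and column before segmentation, since the
# depth resolution at distance does not justify full-resolution work.
def subsample(depth_map, step=2):
    return [row[::step] for row in depth_map[::step]]
```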
3.5.2 Region Filtering and Merging
Figure 3.11: Image after omitting unnecessary lines
In this phase the algorithm takes Figure 10 as input, omits the unnecessary lines that are highly unlikely to be humans, and merges the rest into the remaining regions (Figure 11).
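A hedged sketch of the filtering idea: regions whose pixel counts fall outside a plausible human range are discarded before classification. The bounds below are illustrative assumptions, not the thresholds of [5]:

```python
# Each region is summarized by its pixel count; implausibly small or
# large regions are dropped before the classification stage.
def filter_regions(regions, min_pixels=500, max_pixels=50000):
    return [r for r in regions if min_pixels <= r['pixels'] <= max_pixels]
```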
3.5.3 Candidate Classification
In the final step, a support vector machine is used to decide whether the detected object is a human or not.
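At prediction time a trained linear SVM reduces to a sign test on w·x + b. The sketch below illustrates only that final decision, with toy weights, since [5] trains the real model on depth-image features:

```python
# Decide 'human' (True) when the SVM decision value w.x + b is positive.
def svm_predict(features, weights, bias):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0
```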
3.6 Obstacle detection using equivalent points
First, a photo of an object is taken with two cameras; the object occupies a particular place in each of the pictures taken by the cameras [6]. These points are named equivalent points. If the pictures are taken from different distances, those places are no longer equivalent.
Then the image is converted into binary mode by changing the pixel values to 0 and 1. After that, the equivalent region is separated from the others and the rest of the image is blackened. The object of interest is identified and a box is drawn around it. Since the two images are taken with two cameras, there are regions that exist in the first picture but not in the second; those regions have to be cut out. Because the two cameras are in different positions, one image may be bigger than the other, so the images have to be stretched and standardized to make them comparable. The researchers found three regions in which the object can exist. With this approach it is possible to tell whether an object exists and, if so, in which region the obstacle lies.
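The binary-mode conversion can be sketched as a simple threshold over grayscale values; the threshold of 128 is an illustrative assumption:

```python
# Map each grayscale pixel (0-255) to 1 or 0 so the equivalent region
# can be separated and the rest of the image blackened.
def binarize(gray, threshold=128):
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```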
3.7 Multi-cue Visual Obstacle Detection
Visual detection algorithms are mainly based on color and geometric information [7]. Color-only approaches give copious false positives and false negatives. If there is a paper of a different color on the floor, the robot will detect it as an obstacle; that is a false positive. Likewise, if there is an obstacle of the same color as the floor, the robot will not detect it; that is a false negative. Example instances are shown in Figure 12 below.
Figure 3.12: False negative and false positive
Geometric-only approaches can also give many false negative results. This approach therefore uses two kinds of obstacle detectors, one color-based and one geometric-based, each with its own detection method. Both algorithms produce binary images in which white areas represent the obstacles.
3.7.1 Geometric obstacle detection
This requires the assumption that the floor is nearly planar. In the algorithm, one camera image is mapped onto the other through 3D points, and a new position is determined for each pixel. Obstacle detection is done by warping one of the images onto the other and comparing the two. Under strong lighting conditions, however, the algorithm may give more false positive results.
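A minimal sketch of the comparison step, assuming the planar warp has already been applied so the two grayscale images are aligned; off-plane pixels disagree and are marked as obstacle:

```python
# Binary obstacle mask: 1 where the warped image and the other image
# differ by more than `diff_threshold` gray levels.
def obstacle_mask(warped, other, diff_threshold=30):
    return [[1 if abs(a - b) > diff_threshold else 0
             for a, b in zip(ra, rb)]
            for ra, rb in zip(warped, other)]
```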
3.7.2 Color-based obstacle detection
First of all, several images of the ground are taken and the training process is carried out. Training is done by selecting three regions of the ground and putting all the results into a three-dimensional histogram. The classification process divides into two phases. In the first stage, every pixel is checked against its value in the histogram; if the value is less than a threshold, the pixel is detected as an obstacle. After that, the pixels are checked again in 4 × 4 pixel blocks: only if all the pixels in a block are marked is the block determined to be an obstacle.
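The two-stage check can be sketched as below. The histogram lookup is simplified to a dict keyed by a quantized color, and the threshold value is an assumption:

```python
# Stage 1: a pixel is an obstacle candidate if its color is rare in the
# trained ground histogram. Stage 2: a 4x4 block counts as an obstacle
# only if every one of its 16 pixels is a candidate.
def pixel_is_candidate(color, histogram, threshold=5):
    return histogram.get(color, 0) < threshold

def block_is_obstacle(block_colors, histogram, threshold=5):
    return all(pixel_is_candidate(c, histogram, threshold)
               for c in block_colors)
```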
4 Discussion
As mentioned in Chapter 3, there are many approaches to obstacle detection in images. These days many sensors are used in commercial devices such as unmanned military aircraft, but those sensors are very expensive. Because of that, many devices carry a pair of stereo cameras and most of the time run an efficient algorithm on the stereo images. During the literature survey I identified a few main obstacle detection algorithms that are often used in applications.
Obstacle detection using depth maps and disparity images is quite popular among researchers. Once a disparity image is obtained, it should be filtered by a proper algorithm to reduce the noise. In [3] the 2D EEMD algorithm is used to filter the noise. The problem with this method is that after two layers the decomposition again resembles the first layer, which makes it very difficult for the user to identify the location of the object. On the other hand, the user can gain more details from the depth map: both the shape of the obstacle and its depth can be determined. In strongly lit areas, however, the results are not very accurate. The human detection application in [5] also uses depth image segmentation; since it detects only human obstacles, its task is harder than the other approaches. In the end, both applications succeed, and obstacle detection is done with a minimal error percentage.
In the mobile robot path planning application using 3D and 2D grid maps, obstacle detection is done more successfully than with depth maps [1], because the robot creates a 3D grid map of the whole environment and builds a 2D grid map from it. In this case the robot knows not only what is in front of it but also what objects are around it. The problem is that while the training process is incomplete, there may be unknown objects around the robot. Apart from that, this is a very efficient path planning approach for robots.
Object detection using filtering is not a very efficient approach for real-time applications, because it has many steps and can give many false negative results. It can also be trained for only a few items at once. In this survey the approach is used for a mine-exploration robot [2]. In the appearance model, obstacle detection is done by color detection, which I think is a very inefficient way to detect a mine: mines exist in different colors, and although the shapes of mines are almost the same, there can be many exceptions. When a robot handles something as dangerous as a mine, it should be 100 percent accurate; otherwise the application may be destroyed.
5 Conclusion
As mentioned earlier, obstacle detection is a most important topic in many fields; among them, robotics, unmanned aircraft in military services, and applications that help blind people are very important. This is a very active area in vision, and in the recent past many researchers have entered the field to explore new methods and approaches.
Although there are many techniques and algorithms for obstacle detection, the most suitable approach depends on the application that uses it.
In the mobile robot area, 3D and 2D grid maps [1] and depth images [5] are used most of the time. I think the most efficient path planning and obstacle avoidance technique is using the grid maps, because the robot is trained on its current environment and knows all the obstacles. The only disadvantage is that when a new obstacle is added to the environment, the robot has to regenerate all the grid maps. With depth images, the robot detects obstacles and plans its path in real time, but this approach is much slower than grid maps.
There are two approaches mainly discussed for applications that help blind people. The first uses depth images [3]. I think this is a good approach because the user gets all the information: what shape the obstacle is and how many meters away it is. The problem is that the accuracy of the algorithm depends on image brightness and pixel size. The advantage is that the user can see the obstacle at two depth levels: when the user knows the obstacle is two meters away, there is time to change path. The other approach uses equivalent points, but that method only detects the obstacle; depth cannot be measured. Considering these two approaches, the depth map is clearly the better one.
6 Future works
This literature survey has described how obstacle detection is done using a camera. Besides that, there are many obstacle detection sensors on the market, but they are very expensive at the moment; that is why many applications run with stereo cameras.
For example, LADAR (Laser Detection and Ranging) sensors are used on unmanned aircraft for emergency landing; they scan the whole area before the aircraft lands [12]. A LADAR system is used to determine the distance to an object, and one of its benefits is that it can capture an image of the target while it is calculating the distance to the target. That functionality can be used for obstacle detection purposes.
References
[1] Atsushi Yamashita, Masaaki Kitaoka, and Toru Kaneko, 2011. Motion Planning of
Biped Robot Equipped with Stereo Camera Using Grid Map. IEEE Transactions on
Robotics and Automation, Vol.5 No.5, 639-647.
[2] Stancil, Brian A.; Hyams, Jeffrey; Shelley, Jordan; Babu, Kartik; Badino, Hernán;
Bansal, Aayush; Huber, Daniel; Batavia, Parag, 2013. CANINE: a robotic mine
dog. Intelligent Robots and Computer Vision XXX: Algorithms and Techniques.
Proceedings of the SPIE, Volume 8662, article id. 86620L, 12.
[3] Paulo Costa, Hugo Fernandes, Paulo Martins, João Barroso, Leontios J. Hadjileontiadis, (2012). Obstacle detection using stereo imaging to assist the navigation
of visually impaired people. In 4th International Conference on Software Develop-
ment for Enhancing Accessibility and Fighting Info-exclusion. Douro Region, Portu-
gal, July 19-22, 2012. Portugal: Procedia Computer Science. 83-94.
[4] Young Hoon Lee, Tung-Sing Leung, Gérard Medioni, (2012). Real-time staircase
detection from a wearable stereo system. In Pattern Recognition (ICPR), 2012 21st
International Conference. Tsukuba, 11-15 Nov. 2012. 1-4.
[5] Benjamin Choi, Çetin Meriçli, Joydeep Biswas, and Manuela Veloso, 2013. Fast
Human Detection for Indoor Mobile Robots Using Depth Images. IEEE International
Conference on Robotics and Automation (ICRA)
[6] Nazli Mohajeri, Roozbeh Raste, Sabalan Daneshvar,(2011). An Obstacle Detection
System for Blind People. In Proceedings of the World Congress on Engineering.
London, U.K., July 6 - 8, 2011. WCE.
[7] Luis J. Manso, Pablo Bustos, Pilar Bachiller and José Moreno, 2010. Multi-cue Visual Obstacle Detection for Mobile Robots. Journal of Physical Agents, Vol. 4, No. 1.
[8] Tomoyuki Mori and Sebastian Scherer, First Results in Detecting and Avoiding
Frontal Obstacles from a Monocular Camera for Micro Unmanned Aerial Vehicles,
International Conference on Robotics and Automation, May, 2013
[9] A. Ess, B. Leibe, K. Schindler, L. van Gool, (2009). Moving Obstacle Detection in Highly Dynamic Scenes. In Robotics and Automation, 2009. ICRA '09. IEEE International Conference. Kobe, 12-17 May 2009. 56-63.
[10] Tarek El-Gaaly, Christopher Tomaszewski, Abhinav Valada, Prasanna Velagapudi,
Balajee Kannan, and Paul Scerri, Visual Obstacle Avoidance for Autonomous Wa-
tercraft using Smartphones, Proceedings of the Autonomous Robots and Multirobot
Systems workshop (ARMS 2013, at AAMAS 2013), May, 2013.
[11] Tarek El-Gaaly, Christopher Tomaszewski, Abhinav Valada, Prasanna Velagapudi, Balajee Kannan and Paul Scerri, (2012). A Practical Obstacle Detection System for Autonomous Orchard Vehicles. In 2012 IEEE/RSJ International Conference. Vilamoura, Algarve, Portugal, October 7-12, 2012. 3391-3399.
[12] Sebastian Scherer, Lyle Chamberlain, Sanjiv Singh, 2012. Autonomous landing at
unprepared sites by a full-scale helicopter. Robotics and Autonomous Systems.
[13] Saeid Fazli, Hajar Mohammadi D., Payman Moallem, 2010. An Advanced Stereo Vision Based Obstacle Detection with a Robust Shadow Removal Technique. World Academy of Science, Engineering and Technology, Vol. 43, 699.
[14] Heather Jones, Uland Wong, Kevin Peterson, Jason Koenig, Aashish Sheshadri,
and William (Red) L. Whittaker, Complementary Flyover and Rover Sensing for
Superior Modeling of Planetary Features, Proceedings of the 8th International Con-
ference on Field and Service Robotics, July, 2012.
[15] Kostavelis, I., Nalpantidis, L. and Gasteratos, A., 2009. Real-Time Algorithm for Obstacle Avoidance Using a Stereoscopic Camera. In Third Panhellenic Scientific Student Conference on Informatics.